Deploying Docker Swarm mode services across multiple failure domains

I am new to Docker swarm mode (I am specifically talking about swarm mode in Docker v1.12, not the older non-integrated 'docker swarm').
I am attempting to evaluate its suitability for building a large distributed containerised platform for a new software project (I'm comparing it to similar technologies such as Mesosphere, Kubernetes, et al.).
My understanding of the older non-integrated docker swarm (not swarm mode) is that you could target nodes to deploy to multiple failure domains by using filters. Is there an equivalent in Docker Swarm mode?
For example, in a test environment I have 6 VMs, all running Docker.
I start up VM 1 and 2 and call that my failure domain1
I start up VM 3 and 4 and call that my failure domain2
I start up VM 5 and 6 and call that my failure domain3
Each failure domain consists of one swarm manager and one swarm worker, so in effect I have 2 nodes per domain that can host service containers.
I tell Docker to create a new service and run 3 containers based on an image containing a simple web service. Docker does its thing and spins up 3 containers, and my service is running; I can access my load-balanced web service without any problems. Hurrah!
However, I'd like to specifically tell docker to distribute my 3 containers across domain1, domain2 and domain3.
How can I do this? (Also, am I posting on the correct site, or should this be on one of the other Stack Exchange sites?)

You can continue to use engine labels as you have before, or with the new swarm mode you can define node labels on the swarm nodes. Then you would define a constraint on your service and create 3 separate services, each constrained to run in one of your failure domains.
For node labels, you'd use docker node update --label-add az=1 vm1, which adds the label az=1 to your vm1 node. Repeat this process for your other AZs (availability zone is the term I tend to use) and VMs.
Now when scheduling your job, you add a constraint like
docker service create --constraint node.labels.az==1 \
--name AppAZ1 yourimage
for a node label or for an engine label:
docker service create --constraint engine.labels.az==1 \
--name AppAZ1 yourimage
Repeat this for each of your AZs.
Unfortunately, I can't think of a way to force a spread across each of the AZs automatically when you use something like --replicas 3 that also includes failover to the second node in each VM cluster. However, if you select a single VM per cluster for each task, you could give each of them the same label (e.g. --label-add vm=a) and then use --mode global --constraint node.labels.vm==a to run one task on each of your "a" nodes.
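Putting the pieces together, a minimal sketch for the three-failure-domain setup described above might look like this (assuming the nodes are named vm1..vm6 to match the question's VMs; yourimage is the placeholder used above):
# label one manager and one worker per failure domain
docker node update --label-add az=1 vm1
docker node update --label-add az=1 vm2
docker node update --label-add az=2 vm3
docker node update --label-add az=2 vm4
docker node update --label-add az=3 vm5
docker node update --label-add az=3 vm6
# one service per AZ, each pinned to its own failure domain
docker service create --name web-az1 --replicas 1 --constraint node.labels.az==1 yourimage
docker service create --name web-az2 --replicas 1 --constraint node.labels.az==2 yourimage
docker service create --name web-az3 --replicas 1 --constraint node.labels.az==3 yourimage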

Related

difference between docker service and docker container

I can create a docker container by command
docker run <<image_name>>
I can create a service by command
docker service create <<image_name>>
What is the difference between these two in behaviour?
When would I need to create a service over container?
The docker service command in a Docker swarm replaces docker run. docker run was built for single-host solutions; its whole focus is on local containers on the system it is talking to. In a cluster, the individual containers are irrelevant: we use swarm services to manage the multiple containers in the cluster, and Swarm orchestrates the containers of those services for us.
docker service create is mainly to be used in Docker swarm mode. docker run has no concept of scaling up or down. With docker service create you can specify the number of replicas to be created using the --replicas flag; this creates and manages multiple replicas of a container across many different nodes. There are several such options for managing multiple containers under docker service create and the other docker service ... commands.
One more note: Docker services are for container orchestration systems (Swarm). They have built-in facilities for failure recovery, i.e. a container is recreated on failure, whereas docker run would never recreate a container if it fails. When the docker service commands are used, we are not directly asking to perform an action like "create a single container"; rather, we are telling the orchestration system to "put this job in your queue, and when you can get to it, perform that action on the swarm". This means it has rollback facilities, failure mitigation and a lot of intelligence built in.
You should use docker service create when in swarm mode and docker run when not in swarm mode. You can read up on Docker swarm to understand Docker services.
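For example, a minimal sketch of the difference (nginx here is just a stand-in image):
# single host: one container, never recreated if it dies
docker run -d --name web -p 80:80 nginx
# swarm mode: a service with 3 replicas, scheduled and kept running across the cluster
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=5   # scale up or down later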
There is no real difference. In the official documentation you can read "Services are really just containers in production".
Services can be declared in a docker-compose.yml file and can be started from it. Once started, they run as containers.
It is just a common way to name parts of your stack.
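For instance, a minimal docker-compose.yml sketch declaring a service (the service name and image are placeholders). Note that the deploy section is only honoured when the file is deployed to a swarm with docker stack deploy -c docker-compose.yml mystack; docker-compose up ignores it:
version: "3"
services:
  web:
    image: nginx
    deploy:
      replicas: 3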

Docker Swarm Mode - Show containers per node

I am using Docker version 17.12.1-ce.
I have set up a swarm with two nodes, and I have a stack running on the manager, while I want to instantiate new containers on the worker (not within a service, but as stand-alone containers).
So far I have been unable to find a way to instantiate containers on the worker specifically, and/or to verify that the new container actually got deployed on the worker.
I have read the answer to this question, which led me to run containers with the -e option specifying constraint:Role==worker, constraint:node==<nodeId> or constraint:<custom label>==<value>, and this GitHub issue from 2016 showing the docker info command outputting just the information I would need (i.e. how many containers are on each node at any given time). However, I am not sure if this is a feature of the stand-alone swarm, since docker info only shows the number of nodes, with no detailed info for each node. I have also tried docker -D info.
Specifically, I need to:
Manually specify which node to deploy a stand-alone container to (i.e. not related to a service).
Check that a container is running on a specific swarm node, or check how many containers are running on a node.
Swarm commands will only care about and show service-related containers. If you create one with docker run, then you'll need to use something like ssh node2 docker ps to see all containers on that node.
I recommend you do your best in a Swarm to run all containers as part of a service. If you need a container to run on nodeX, you can create a service with a node constraint, using labels and constraints. In this case you could restrict the single replica of that service to a node's hostname:
docker service create --constraint node.hostname==swarm2 nginx
To see all tasks on a node from any swarm manager:
docker node ps <nodename_or_id>
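If you'd rather not pin to a hostname, here is a rough sketch using a custom node label instead (the label role=batch, the service name and the nginx image are assumptions for illustration):
# tag the worker with a custom label, then constrain the service to it
docker node update --label-add role=batch node2
docker service create --name myjob --replicas 1 --constraint node.labels.role==batch nginx
# list the tasks running on that node from any manager
docker node ps node2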

Adding new containers to an existing cluster (swarm)

I am having a problem working out the best way to add a new container to an existing cluster while all containers run in Docker.
Assume I have a Docker swarm, and whenever a container stops or fails for some reason, the swarm brings up a new container and expects it to add itself to the cluster.
How can I make any container be able to add itself to a cluster?
For example, if I want to create a RabbitMQ HA cluster, I need to create a master and then create slaves. Assuming every instance of RabbitMQ (master or slave) is a container, let's now assume that one of them fails. We have 2 options:
1) slave container has failed.
2) master container has failed.
Usually, a service that has the ability to run as a cluster also has the ability to elect a new leader to be the master. So, assuming this scenario works seamlessly without any intervention, how would a new container added to the swarm (using docker swarm) be able to add itself to the cluster?
The problem here is that the new container is not created with new arguments every time; the container is always created as it was first deployed, which means I can't just change its command-line arguments, and this is a cloud, so I can't hard-code an IP to use.
Something here is missing.
Maybe declaring a "Service" at the "docker Swarm" level would actually give the new container the ability to add itself to the cluster without really knowing anything about the other machines in the cluster...
There are quite a few options for scaling out containers with Swarm. They range from something as simple as passing in the information via a container environment variable to something as extensive as service discovery.
Here are a few options:
Pass in the IP as a container environment variable, e.g. docker run -td -e HOST_IP=$(ifconfig wlan0 | awk '/inet addr:/{gsub(/.*:/,"",$2);print $2}') somecontainer:latest
This would set the container environment variable HOST_IP to the IP of the machine it was started on.
Service discovery. Querying a known point of entry to determine information about any required services, such as IP, port, etc.
This is the most common type of scale-out option. You can read more about it in the official Docker docs. The high-level overview is that you set up a service like Consul on the masters, which your services query to find the information of other relevant services. Example: a web server requires a DB. The DB adds itself to Consul, and the web server starts up and queries Consul for the database's IP and port.
Network Overlay. Creating a network in swarm for your services to communicate with each other.
Example:
$ docker network create -d overlay mynet
$ docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
$ docker service create --name redis --network mynet redis:latest
This allows the web app to communicate with redis by placing them on the same network.
Lastly, in your example above it would be best to deploy the master and the slaves as 2 separate services which you scale individually, e.g. deploy one MASTER service and one SLAVE service, then scale each depending on the number you need. For example, to scale to 3 slaves you would run docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>, which starts the additional slaves. In this scenario, if one of the scaled slaves fails, Swarm starts a new one to bring the number of tasks back to 3.
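A rough sketch of that layout for the RabbitMQ example (the service and network names are assumptions, and the actual RabbitMQ clustering configuration, such as the Erlang cookie and peer discovery, still has to be handled inside the images):
$ docker network create -d overlay rabbitnet
$ docker service create --name rabbit-master --network rabbitnet rabbitmq:3
$ docker service create --name rabbit-slave --network rabbitnet rabbitmq:3
$ docker service scale rabbit-slave=3   # Swarm keeps 3 slave tasks running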
https://docs.docker.com/engine/reference/builder/#healthcheck
Docker images now support a HEALTHCHECK instruction. Use a health check in your image, for example:
HEALTHCHECK CMD ./anyscript.sh || exit 1
HEALTHCHECK runs the given command and uses its exit status (0 or 1) to report the container as:
1. healthy
2. unhealthy
3. starting (while the first checks are still running)
Docker Swarm automatically restarts unhealthy containers in the swarm cluster.
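A small sketch of a Dockerfile using HEALTHCHECK with its common options (the nginx base image and the curl probe are assumptions for illustration):
FROM nginx:alpine
RUN apk add --no-cache curl
# probe the web server every 30s; after 3 consecutive failures the container is marked unhealthy
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1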

How to run same container on all Docker Swarm nodes

I'm just getting my feet wet with Docker Swarm because we're looking at ways to configure our compute cluster to make it more containerized.
Basically we have a small farm of 16 computers and I'd like to be able to have each node pull down the same image, run the same container, and accept jobs from an OpenMPI program running on a master node.
Nothing is really OpenMPI specific about this, just that the containers have to be able to open SSH ports and the master must be able to log into them. I've got this working with a single Docker container and it works.
Now I'm learning Docker Machine and Docker Swarm as a way to manage the 16 nodes. From what I can tell, once I set up a swarm, I can then set it as the DOCKER_HOST (or use -H) to send a "docker run", and the swarm manager will decide which node runs the requested container. I got this basically working using a simple node list instead of messing with discovery services, and so far so good.
But I actually want to run the same container on all nodes in one command. Is this possible?
Docker 1.12 introduced global services: by passing --mode global to docker service create, Docker will schedule a task of the service on every node.
With the older standalone Docker Swarm, you can use labels and negative affinity filters to get the same result:
openmpi:
  environment:
    - "affinity:container!=*openmpi*"
  labels:
    - "com.myself.name=openmpi"

Creating multiple Docker container

I have to create a huge number of Docker containers on different hosts (e.g. 50 containers each on 3 hosts). These containers all have the same image, configuration, etc., and only the network address and ID of each container should differ (so basically I want to create a huge virtual container network).
Is there a way to achieve this?
I have looked at technologies like Helios and Kubernetes, but they seem to only deploy one container on each agent. I thought about just creating a lot of different jobs in Helios and then deploying each one of them to its agent, but that seems a little dirty to me.
This is exactly the type of use case that Kubernetes is well suited for.
You should use a Replica Set. When creating your Replica Set, you specify a template that tells the system how to create each container instance, along with a desired number of replicas. It will then create that number of replicas across the available nodes in your cluster.
One caveat is that by default, Kubernetes will only allow you to have ~100 pods / node, but you can change this number with a command line flag if you need more.
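A minimal sketch of such a ReplicaSet manifest (the names and the my-app image are placeholders, matching the compose example below):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app
spec:
  replicas: 150          # desired number of identical pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app    # placeholder image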
For the Docker specific solution, you can use Swarm and Compose.
Create your Docker Swarm cluster of 3 nodes and change your environment to that Swarm. (The below assumes each host is listening on 2375, which is OK for a private network, but you'll want TLS set up and to switch over to 2376 for more security.)
cat >cluster.txt <<EOF
node1:2375
node2:2375
node3:2375
EOF
docker run -d -P --restart=always --name swarm-manager \
  -v "$(pwd)/cluster.txt:/cluster.txt" \
  swarm manage file:///cluster.txt
export DOCKER_HOST=$(docker port swarm-manager 2375)
Define your service inside of a docker-compose.yml, and then run docker-compose scale my-service=150. If your Swarm is setup with the default spread strategy, it will distribute them across the 3 hosts based on the number of containers running (or stopped) on each.
cat >docker-compose.yml <<EOF
my-app:
  image: my-app
EOF
docker-compose scale my-app=150
Note that the downside of docker-compose over the other tools out there is that it doesn't correct for outages until you rerun it.
