Connect to docker containers with just service name and task slot?

I'm attempting to use Docker swarm to manage a cluster of CouchDB 2 nodes, where each node is a service. It's great that I can connect the nodes using the container name, which takes the form {{.Service.Name}}.{{.Task.Slot}}.{{.Node.ID}}, e.g. couchdb.1.52286blz1o0c7ym508sne6jyg. The issue is that the Node ID is a dynamic value, so my node names are constantly changing. This dynamic Node ID doesn't play well with CouchDB's node configuration, which really needs static node names like couchdb.1, couchdb.2, etc.
Is there a way to configure the containers so that they can connect to each other with just {{.Service.Name}}.{{.Task.Slot}}? This way, if a node dies and is restarted, the restarted node will retake the dead node’s place automatically.
Docker run appears to have a --network-alias flag that could achieve this configuration, but nothing like --network-alias appears to be available for docker swarm services.
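For reference, this is roughly what that alias looks like with plain docker run on a user-defined network (hypothetical network and alias names); as noted above, docker swarm offers no direct equivalent of this flag:

$ docker network create couch-net
$ docker run -d --network couch-net --network-alias couchdb.1 couchdb:2
$ docker run -d --network couch-net --network-alias couchdb.2 couchdb:2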

Related

How does Swarm mode image orchestration work?

I have set up a 3-node cluster (with no Internet access), with 1 manager and 2 worker nodes, using the standard swarm documentation.
How does the swarm manager in swarm mode know about the images present in worker nodes?
Let's say I have image A on worker-node-1 and image B on worker-node-2, and no images on the manager node.
Now how do I start a container for image A using the manager?
Will it start on the manager or on node-1?
When I query the manager for the list of images, will it give the whole list, with both A and B in it?
Does anyone know how this works?
I couldn’t get the details from the documentation.
A Docker Swarm manager node can also act as a worker, but it is not required to.
The image deployment policy is declared in docker-compose.yml, which carries information such as target nodes, networks, hostnames, volumes, etc. for each service. A service will therefore start either on the node you specified or, by default, on the least loaded one.
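A minimal sketch of such a deployment policy in a compose file (hypothetical service and network names), deployed with docker stack deploy:

version: "3.7"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == worker
    networks:
      - mynet
networks:
  mynet:
    driver: overlay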
Swarm manager communicates with the worker nodes via Docker networks:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default;
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
(Reference: Docker's overlay networking documentation.)
During swarm deployment, the images of its services are propagated to the worker nodes according to the deployment policy.
The manager node will hold images too when it also acts as a worker (correct me if that's wrong).
The default configuration with swarm mode is to pull images from a registry server and use pinning to reference a unique hash for those images. This can be adjusted, but there is no internal mechanism to distribute images within a cluster.
For an offline environment, I'd recommend a standalone registry server accessible to the cluster. You can even run it on the cluster. Push your image there, and point your services at the registry for the images they pull. See this doc for details on running a standalone registry, or any of the many 3rd-party options (e.g. Harbor): https://docs.docker.com/registry/
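As a minimal sketch (hypothetical image name; a real setup would also need TLS or an insecure-registry configuration):

$ docker service create --name registry --publish 5000:5000 registry:2
$ docker tag myapp:latest 127.0.0.1:5000/myapp:latest
$ docker push 127.0.0.1:5000/myapp:latest
$ docker service create --name myapp 127.0.0.1:5000/myapp:latest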
The other option is to disable image pinning and manually copy images to each of your swarm nodes. You need to do this in advance of deploying any service changes. You'll also lose the benefit of reused image layers when you copy them manually. Because of all the issues this creates, the overhead of managing it, and the risk of mistakes, I'd recommend against this option.
Run the docker stack deploy command with --with-registry-auth; that gives the workers access to pull the needed image.
By default, Docker Swarm pulls the latest image from the registry when deploying.
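For example (hypothetical registry and stack names):

$ docker login registry.example.com
$ docker stack deploy --with-registry-auth -c docker-compose.yml mystack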

Docker Swarm: will restarting a single node manager delete all my services?

I have a single node Docker Swarm setup with a dozen services created by simply calling docker service create [...].
Can anyone tell me what will happen to my services if I reboot my node? Will they restart automatically, or will I have to recreate them all?
I understand that Swarm services and docker-compose setups are different, but in case I do have to recreate the services upon reboot, is there a way to save a docker-compose.yml file for each of my services (i.e. something that parses the output of docker service inspect)? Is there a better way of "saving" my services' configuration?
No need to recreate the services; they will remain the same even after the node restarts. I have tested this in my swarm cluster: a three-node swarm setup (1 manager & 2 workers). I completely stopped the worker nodes, and the services on the worker nodes moved to the active (manager) node. I then restarted the active (manager) node, and I can still see the services up and running on it.
Before restart: [screenshot]
After restart: [screenshot]
So even if you are running a one-node swarm, there is no need to worry about the services; they will be recreated automatically. I've attached screenshots for reference.
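For example, after the reboot you can confirm everything came back with the standard commands:

$ docker service ls
$ docker service ps <service-name>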

Docker Swarm Mode - Show containers per node

I am using Docker version 17.12.1-ce.
I have set up a swarm with two nodes, and I have a stack running on the manager, while I want to instantiate new containers on the worker (not within a service, but as stand-alone containers).
So far I have been unable to find a way to instantiate containers on the worker specifically, and/or to verify that the new container actually got deployed on the worker.
I have read the answer to this question, which led me to run containers with the -e option specifying constraint:Role==worker, constraint:node==<nodeId>, or constraint:<custom label>==<value>, as well as this GitHub issue from 2016 showing the docker info command outputting just the information I would need (i.e. how many containers are on each node at any given time). However, I am not sure if this is a feature of the standalone swarm, since docker info shows only the number of nodes, with no detailed info for each node. I have also tried docker -D info.
Specifically, I need to:
Manually specify which node to deploy a stand-alone container to (i.e. not related to a service).
Check that a container is running on a specific swarm node, or check how many containers are running on a node.
Swarm commands only deal with and show service-related containers. If you create one with docker run, then you'll need to use something like ssh node2 docker ps to see all containers on that node.
I recommend you do your best in a Swarm to have all containers as part of a service. If you need a container to run on nodeX, then you can create a service with a "node constraint" using labels and constraints. In this case you could restrict the single replica of that service to a node's hostname.
docker service create --constraint node.hostname==swarm2 nginx
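Or, equivalently, pin it with a custom node label (hypothetical label name and value):

docker node update --add-label mylabel=myvalue swarm2
docker service create --constraint node.labels.mylabel==myvalue nginx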
To see all tasks on a node from any swarm manager:
docker node ps <nodename_or_id>

Adding new containers to an existing cluster (swarm)

I am having a problem figuring out the best way to add a new container to an existing cluster while all containers run in Docker.
Assume I have a Docker swarm, and whenever a container stops or fails for some reason, the swarm brings up a new container and expects it to add itself to the cluster.
How can I make any container be able to add itself to a cluster?
I mean, for example, if I want to create a RabbitMQ HA cluster, I need to create a master and then create slaves. Assuming every instance of RabbitMQ (master or slave) is a container, let's now assume that one of them fails. We have 2 options:
1) slave container has failed.
2) master container has failed.
Usually, a service that has the ability to run as a cluster also has the ability to elect a new leader as the master. So, assuming this scenario works seamlessly without any intervention, how would a new container added to the swarm (using docker swarm) be able to add itself to the cluster?
The problem here is that the new container is not created with new arguments every time; the container is always created as it was deployed the first time. That means I can't just change its command-line arguments, and since this is a cloud, I can't hard-code an IP to use.
Something here is missing.
Maybe declaring a "service" at the Docker Swarm level will actually give the new container the ability to add itself to the cluster without really knowing anything about the other machines in it...
There are quite a few options for scaling out containers with Swarm. It can range from being as simple as passing in the information via a container environment variable to something as extensive as service discovery.
Here are a few options:
Pass in the IP as a container environment variable, e.g. docker run -td -e HOST_IP=$(ifconfig wlan0 | awk '/inet addr:/{gsub(/.*:/,"",$2);print $2}') somecontainer:latest
this would set the internal container environment variable HOST_IP to the IP of the machine it was started on.
Service Discovery. Querying a known point of entry to determine information about any required services, such as IP, port, etc.
This is the most common type of scale-out option. You can read more about it in the official Docker docs. The high-level overview is that you set up a service like Consul on the masters, which your services query to find the information of other relevant services. Example: a web server requires a DB; the DB would register itself with Consul, and the web server would start up and query Consul for the database's IP and port.
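As a sketch, that lookup could be a simple call against Consul's HTTP catalog API (hypothetical Consul address and service name):

$ curl http://consul.example.local:8500/v1/catalog/service/db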
Network Overlay. Creating a network in swarm for your services to communicate with each other.
Example:
$ docker network create -d overlay mynet
$ docker service create --name frontend --replicas 5 -p 80:80/tcp --network mynet mywebapp
$ docker service create --name redis --network mynet redis:latest
This allows the web app to communicate with redis by placing them on the same network.
Lastly, in your example above it would be best to deploy it as 2 separate services that you scale individually, e.g. one MASTER service and one SLAVE service. Then you would scale each depending on the number you need, e.g. to scale to 3 slaves you would run docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>, which starts the additional slaves. In this scenario, if one of the scaled slaves fails, swarm will start a new one to bring the number of tasks back to 3.
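A concrete sketch of that pattern (hypothetical service and network names; the RabbitMQ cluster configuration itself is omitted here):

$ docker service create --name rabbit-master --network mynet rabbitmq:3
$ docker service create --name rabbit-slave --network mynet rabbitmq:3
$ docker service scale rabbit-slave=3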
https://docs.docker.com/engine/reference/builder/#healthcheck
Docker images support a health check instruction in the Dockerfile.
Use a HEALTHCHECK in your containers, for example:

HEALTHCHECK CMD curl -f http://localhost/ || exit 1

(or any command you want to run). HEALTHCHECK looks at the exit code of the command (0 or 1) and reports the result as:
1. healthy
2. unhealthy
3. starting (while the first checks are still running)
Docker Swarm automatically restarts unhealthy containers in the swarm cluster.

How to run same container on all Docker Swarm nodes

I'm just getting my feet wet with Docker Swarm because we're looking at ways to configure our compute cluster to make it more containerized.
Basically we have a small farm of 16 computers and I'd like to be able to have each node pull down the same image, run the same container, and accept jobs from an OpenMPI program running on a master node.
Nothing is really OpenMPI specific about this, just that the containers have to be able to open SSH ports and the master must be able to log into them. I've got this working with a single Docker container and it works.
Now I'm learning Docker Machine and Docker Swarm as a way to manage the 16 nodes. From what I can tell, once I set up a swarm, I can then set it as the DOCKER_HOST (or use -H) to send a "docker run", and the swarm manager will decide which node runs the requested container. I got this basically working using a simple node list instead of messing with discovery services, and so far so good.
But I actually want to run the same container on all nodes in one command. Is this possible?
Docker 1.12 introduced global services: pass --mode global to docker service create and Docker will schedule a task on every node.
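For example (hypothetical service and image names):

$ docker service create --mode global --name worker myimage:latest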
With the older standalone Docker Swarm (not swarm mode), you can use labels and negative affinity filters to achieve the same result:
openmpi:
  environment:
    - "affinity:container!=*openmpi*"
  labels:
    - "com.myself.name=openmpi"
