Run a docker container on a specific node in a docker swarm - docker

I have a Jenkins pipeline that builds and runs various containers on a NUC server.
This NUC is in a cluster (Swarm) with another NUC.
I recently added a couple of Raspberry Pis to my setup and to that cluster, so now I wonder if there is a way to tell Jenkins to deploy on x86_64 or armhf devices.
I tried the -e constraint:node==<node_name> solution I found in other questions, but I had no luck.
I tried the above command from one x86_64 node pointing to another, and from an x86_64 node pointing to an armhf node.
I don't want to run those containers as a service and I don't care about any load balancer; I just want to run a container on a specific architecture (x86_64 or armhf, depending on what I want to deploy).

You cannot use constraints on containers, only on services.
That being said, using constraints on services seems a good way to go.
You can learn more about services constraints in the official documentation.
Here are a few examples of how constraints can match node or Docker Engine labels:
# Node ID
node.id==2ivku8v2gvtg4
# Node hostname
node.hostname!=node-2
# Node role
node.role==manager
# Node labels
node.labels.security==high
# Docker Engine's labels
engine.labels.operatingsystem==ubuntu 14.04
If you want to match a specific hostname, you need to use node.hostname==<hostname> and not node==<hostname>.
You will also want to set the restart_policy key in your service's deploy section to prevent Swarm from starting a new container once the first one terminates its process successfully.
Wrapping it all together, you need something like this:
version: "3.7"
services:
myapp:
image: <my_image>
deploy:
placement:
constraints:
- node.labels.arch == x86_64
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
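If you prefer a one-off docker service create over a stack file, a rough equivalent would be the following (a sketch only; <my_image> is the same placeholder as above):
docker service create \
  --name myapp \
  --constraint node.labels.arch==x86_64 \
  --restart-condition on-failure \
  --restart-max-attempts 3 \
  <my_image>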
Of course, it is up to you to add the labels to each Swarm node. Ansible and its docker_node module are well suited for this purpose. Here is an example playbook:
- hosts: swarm
  tasks:
    - name: Add label to node specifying architecture
      docker_node:
        hostname: "{{ inventory_hostname }}"
        labels:
          arch: "{{ ansible_architecture }}"
Your docker nodes would be labelled with the key arch and one of the following values:
armhf
armv7l
i386
x86_64
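If you would rather not use Ansible, the same label can be added by hand from a manager node, for example:
docker node update --label-add arch=x86_64 <node_name>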

Why don't you want to use a service? Services are how we orchestrate things in Swarm. docker container ... will only start/stop/inspect/manipulate things on a single host. As you already have a swarm cluster set up, why not use a service?

Related

docker stack deploy does not update config

I am trying to set up a zero-downtime deployment using docker stack deploy on a single-node Docker Swarm running on localhost.
After building the image demo:latest, the first deployment using the command docker stack deploy --compose-file docker-compose.yml demo works: I can see 4 replicas running and can access the nginx default home page on port 8080 on my local machine. After updating index.html and rebuilding the image with the same name and tag, running the docker stack deploy command again causes the error below and the changes are not reflected.
Deleting the deployment and recreating it works, but I am trying to see how updates can be rolled out without downtime. Please help here.
Error
Updating service demo_demo (id: wh5jcgirsdw27k0v1u5wla0x8)
image demo:latest could not be accessed on a registry to record
its digest. Each node will access demo:latest independently,
possibly leading to different nodes running different
versions of the image.
Dockerfile
FROM nginx:1.19-alpine
ADD index.html /usr/share/nginx/html/
docker-compose.yml
version: "3.7"
services:
demo:
image: demo:latest
ports:
- "8080:80"
deploy:
replicas: 4
update_config:
parallelism: 2
order: start-first
failure_action: rollback
delay: 10s
rollback_config:
parallelism: 0
order: stop-first
TLDR: push your image to a registry after you build it
Docker swarm doesn't really work without a public or private docker registry. Basically all the nodes need to get their images from the same place, and the registry is the mechanism by which that information is shared. There are other ways to get images loaded on each node in the swarm, but it involves executing the same commands on every node one at a time to load in the image, which isn't great.
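As a sketch of that workflow (registry.example.com is a placeholder for whatever registry you use, and the compose file's image key would point at it):
docker build -t registry.example.com/demo:latest .
docker push registry.example.com/demo:latest
docker stack deploy --compose-file docker-compose.yml demo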
Alternatively you could use docker configs for your configuration data and not rebuild the image every time. That would work passably well without a registry, and you can swap out the config data with little to no downtime:
Rotate Configs
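A minimal sketch of that approach, assuming the page you want to swap lives in ./index.html next to the compose file:
version: "3.7"
services:
  demo:
    image: nginx:1.19-alpine
    ports:
      - "8080:80"
    configs:
      - source: index_html
        target: /usr/share/nginx/html/index.html
configs:
  index_html:
    file: ./index.html
Since configs are immutable, rotating one means adding a new config entry (e.g. index_html_v2), pointing the service at it, and removing the old one, as described in the linked documentation.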

Traefik with docker swarm and mode: global: frontend rule to substitute hostname

I have a Docker Swarm cluster (currently 5 machines) where I run everything as a Docker stack, deploying from the host manager1 like so:
$ docker stack deploy -c docker-compose.yml mystack
But I use Traefik as a reverse proxy.
I wanted to add a Syncthing container to share some data between nodes, so I want it to run on each node. This is achieved thanks to the option:
deploy:
  mode: global
This properly creates the containers I want, one per node.
I then want to access each Syncthing instance, thanks to Traefik, with unique urls like this:
frontend: manager1.syncthing.mydomain.com --> backend: syncthing container on host manager1
frontend: worker1.syncthing.mydomain.com --> backend: syncthing container on host worker1
frontend: worker2.syncthing.mydomain.com --> backend: syncthing container on host worker2
...
I fail to find the proper configuration for this (is it even possible?).
I thought about substituting a variable in the docker-compose file like so:
deploy:
  ...
  labels:
    ...
    - "traefik.frontend.rule=Host:${HOSTNAME}.syncthing.mydomain.com"
Even though $HOSTNAME is defined on all nodes (including the manager), this fails; Traefik creates a useless route: ".syncthing.mydomain.com". Some research suggests that ${HOSTNAME} should at least be substituted with "manager1" rather than "" (pull/30781). Anyway, I think this can safely be expected not to work, as the substitution would probably be done on the manager1 node where the docker stack deploy command is run.
As a workaround, I could deploy one service per node and pin each one with a placement constraint (a sketch of this follows); but this does not scale, as new nodes would have to be added manually.
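Roughly, that per-node workaround would look like this (the service names are made up and the image name is only a placeholder; hostnames and domain are the ones from the question):
services:
  syncthing_manager1:
    image: syncthing/syncthing
    deploy:
      placement:
        constraints:
          - node.hostname == manager1
      labels:
        - "traefik.frontend.rule=Host:manager1.syncthing.mydomain.com"
  syncthing_worker1:
    image: syncthing/syncthing
    deploy:
      placement:
        constraints:
          - node.hostname == worker1
      labels:
        - "traefik.frontend.rule=Host:worker1.syncthing.mydomain.com"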
Any help would be greatly appreciated.
PS:
I run everything on ARM (Raspberry Pi).
Docker version 17.05.0-ce, build 89658be
docker-compose version 1.9.0, build 2585387
traefik:cancoillotte

Docker swarm node unable to detect service from another host in swarm

My goal is to set up a docker swarm on a group of 3 linux (ubuntu) physical workstations and run a dask cluster on that.
$ docker --version
Docker version 17.06.0-ce, build 02c1d87
I am able to init the docker swarm and add all of the machines to the swarm.
cordoba$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
j8k3hm87w1vxizfv7f1bu3nfg     box1       Ready    Active
twg112y4m5tkeyi5s5vtlgrap     box2       Ready    Active
upkr459m75au0vnq64v5k5euh *   box3       Ready    Active         Leader
I then run docker stack deploy -c docker-compose.yml dask-cluster on the Leader box.
Here is docker-compose.yml:
version: "3"
services:
dscheduler:
image: richardbrks/dask-cluster
ports:
- "8786:8786"
- "9786:9786"
- "8787:8787"
command: dask-scheduler
networks:
- distributed
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager]
dworker:
image: richardbrks/dask-cluster
command: dask-worker dscheduler:8786
environment:
- "affinity:container!=dworker*"
networks:
- distributed
depends_on:
- dscheduler
deploy:
replicas: 3
restart_policy:
condition: on-failure
networks:
distributed:
and here is richardbrks/dask-cluster:
# Official python base image
FROM python:2.7
# update apt repository
RUN apt-get update
# only install enough libraries to run dask on a cluster (with monitoring)
RUN pip install --no-cache-dir \
    psutil \
    dask[complete]==0.15.2 \
    bokeh
When I deploy the swarm, the dworker nodes that are not on the same machine as dscheduler do not know what dscheduler is. I ssh'd into one of those nodes and looked at the environment, and dscheduler was not there. I also tried to ping dscheduler and got "ping: unknown host". I thought Docker was supposed to provide internal DNS-based service discovery so that calling dscheduler would take me to the address of the dscheduler node.
Is there some setup on my computers that I am missing, or are any of my files missing something?
All of this code is also located in https://github.com/MentalMasochist/dask-swarm
According to this issue in swarm:
Because of some networking limitations (I think related to virtual IPs), the ping tool will not work with overlay networking. Are your service names resolvable with other tools like dig?
Personally I could always connect from one service to the other using curl. Your setup seems correct and your services should be able to communicate.
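A quick way to check resolution from inside a running task (a sketch; the exact container name will differ, and it relies on the image being Python-based, which it is here):
# find a dworker task running on the current node
docker ps --filter name=dask-cluster_dworker --format '{{.Names}}'
# resolve the scheduler's virtual IP from inside that container
docker exec -it <container_name> python -c "import socket; print(socket.gethostbyname('dscheduler'))"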
FYI, depends_on is not supported in swarm mode.
Update 2: I think you are not using the port. The service name is not a replacement for the port; you need to connect using the port as the container knows it internally.
There was nothing wrong with dask or docker swarm. The problem was bad router firmware. After I went back to a prior version of the router firmware, the cluster worked fine.

Deploy a docker stack on one node (co-schedule containers like docker swarm)

I'm aware that docker-compose with the legacy docker-swarm is able to co-schedule some services on one node (using dependency filters such as link).
I was wondering if this kind of co-scheduling is possible using modern docker engine swarm mode and the new stack deployment introduced in Docker 1.13
In docker-compose file version 3, links are said to be ignored while deploying a stack in a swarm, so obviously links aren't the solution.
We have a bunch of servers for running short batch jobs, and the network between them is not very fast. We want to run each batch job (which consists of multiple containers) on one server to avoid networking overhead. Is this feature implemented in docker stack or docker swarm mode, or should we use the legacy docker-swarm?
Also, I couldn't find co-scheduling with another container in the placement policies.
@Roman: You are right.
To deploy to a specific node you need to use placement policy:
version: '3'
services:
  job1:
    image: example/job1
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
  job2:
    image: example/job2
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
networks:
  example:
    driver: overlay
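Deploying both jobs pinned to node-1 is then the usual stack deploy (the stack name here is arbitrary):
docker stack deploy -c docker-compose.yml batch-jobs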
You can still use depends_on
It's worth having a look at dockerize too.

Can Kubernetes be used like Docker Compose?

I have been digging through the Kubernetes documentation for hours. I understand the core design, and the notion of services, controllers, pods, etc.
What I don't understand, however, is how I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc., without running a ton of CLI commands.
Is there docker-compose functionality for Kubernetes?
I want my application to be defined in git and version controlled, without relying on manual CLI interactions.
Is this possible to do in a concise way? Is there a reference that is more clear than the official documentation?
If you're still looking, maybe this tool can help: https://github.com/kelseyhightower/compose2kube
You can create a compose file:
# sample compose file with 3 services
web:
image: nginx
ports:
- "80"
- "443"
database:
image: postgres
ports:
- "5432"
cache:
image: memcached
ports:
- "11211"
Then use the tool to convert it to kubernetes objects:
compose2kube -compose-file docker-compose.yml -output-dir output
Which will create these files:
output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
Then you can use kubectl to apply them to kubernetes.
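For example (a sketch; since the generated files are plain manifests, the whole output directory can be created at once):
kubectl create -f output/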
If you have existing Docker Compose files, you may take a look at the Kompose project.
kompose is a tool to help users who are familiar with docker-compose move to Kubernetes. kompose takes a Docker Compose file and translates it into Kubernetes resources.
kompose is a convenience tool to go from local Docker development to managing your application with Kubernetes. Transformation of the Docker Compose format to Kubernetes resources manifest may not be exact, but it helps tremendously when first deploying an application on Kubernetes.
To run a docker-compose.yaml file, or your own, run:
kompose up
To convert docker-compose.yaml into Kubernetes deployments and services with one simple command:
$ kompose convert -f docker-compose.yaml
INFO Kubernetes file "frontend-service.yaml" created
INFO Kubernetes file "redis-master-service.yaml" created
INFO Kubernetes file "redis-slave-service.yaml" created
INFO Kubernetes file "frontend-deployment.yaml" created
INFO Kubernetes file "redis-master-deployment.yaml" created
INFO Kubernetes file "redis-slave-deployment.yaml" created
For more info, check: http://kompose.io/
Docker has officially announced docker-compose functionality for Kubernetes clusters. So from now on you can compose Kubernetes resources in a file and apply them using that single file.
First we need to install the Compose on Kubernetes controller into your Kubernetes cluster. This controller uses the standard Kubernetes extension points to introduce the Stack to the Kubernetes API. Check the full documentation to install the docker compose controller:
https://github.com/docker/compose-on-kubernetes
Let's write a simple compose yaml file:
version: "3.7"
services:
web:
image: dockerdemos/lab-web
ports:
- "33000:80"
words:
image: dockerdemos/lab-words
deploy:
replicas: 3
endpoint_mode: dnsrr
db:
image: dockerdemos/lab-db
We’ll then use the docker client to deploy this to a Kubernetes cluster running the controller:
$ docker stack deploy --orchestrator=kubernetes -c docker-compose.yml words
Waiting for the stack to be stable and running...
db: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
web: Ready [pod status: 1/1 ready, 0/1 pending, 0/1 failed]
words: Ready [pod status: 1/3 ready, 2/3 pending, 0/3 failed]
Stack words is stable and running
We can then interact with those objects via the Kubernetes API. Here you can see we’ve created the lower-level objects like Services, Pods, Deployments and ReplicaSets automatically:
$ kubectl get deployments
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/db      1         1         1            1           57s
deployment.apps/web     1         1         1            1           57s
deployment.apps/words   3         3         3            3           57s
It’s important to note that this isn’t a one-time conversion. The Compose on Kubernetes API Server introduces the Stack resource to the Kubernetes API. So we can query and manage everything at the same level of abstraction as we’re building the application. That makes delving into the details above useful for understanding how things work, or debugging issues, but not required most of the time:
$ kubectl get stack
NAME    STATUS    PUBLISHED PORTS   PODS   AGE
words   Running   33000             5/5    4m
Kubernetes certainly has its own yaml (as shown in "Deploying Applications")
But as "Docker Clustering Tools Compared: Kubernetes vs Docker Swarm", it was not written (just) for Docker, and it has its own system.
You could use docker-compose to start Kubernetes though, as shown in "vyshane/kid": that does mask some of the kubectl commands cli in scripts (which can be versioned).
