Start a docker service based on another service - docker

Is there a possibility to start a service on a specific node, based on another running service? (using Docker Swarm)
To make myself a little more clear:
I want to run Nextcloud on a different node than, for example, Typo3, to spare some resources on the Nextcloud node.
How would I write that in a compose file?

Look into deploy and using labels:
Example:
deploy:
  mode: replicated
  replicas: 1
  placement:
    constraints:
      - node.labels.NextcloudDaemon == true
  restart_policy:
    condition: any
The above example will run exactly 1 container, and only on the node you've already given the label of "NextcloudDaemon".
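For completeness, the label referenced by that constraint has to exist on the target node first; it can be added with docker node update (the node name here is illustrative):

docker node update --label-add NextcloudDaemon=true my-nextcloud-node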

Related

Is it possible to specify preferred node-placement for Docker Swarm services?

If I have a Docker Swarm cluster consisting of 5 nodes, I'm aware that I can assign labels to particular nodes to guide which VMs services get deployed to. I.e. in the Docker Stack configuration:
...
deploy:
  replicas: 1
  placement:
    constraints:
      - node.labels.service == postgres
Is it possible to specify a preferred placement node? Something like this:
...
deploy:
  replicas: 1
  placement:
    constraints:
      - node.labels.service == postgres   # This is always the first choice
      - node.labels.service == postgres_2 # This is the second choice
i.e. the logic I'm looking for is something along the lines of:
Unless something is wrong with the node labeled postgres, deploy there. If something is wrong with that node (for example, a corrupted file system), then deploy to the node labeled postgres_2. When a node labeled postgres exists again as part of the swarm, redeploy to that node and delete the service on postgres_2.

Docker Swarm - Scaling Containers

I have a Docker Swarm environment with 7 nodes (3 managers and 4 workers). I am trying to deploy a container, and my requirement is that at any point in time I need 2 instances of this container running, but when I scale it up, the container should be deployed to a different node than the one it is currently running on.
For example, say one instance of the container is running on node 4; when I scale to scale=2, it should run on any node except node 4.
I tried this, but no luck:
deploy:
  mode: global
  placement:
    constraints:
      - node.labels.cloud.type == nodesforservice
We solved this with the deployment preferences configuration (under the placement section). We set node.labels.worker on all our worker nodes. We have 3 workers, with node.labels.worker = worker1, node.labels.worker = worker2, and node.labels.worker = worker3 set on them respectively. On the docker-compose side we then configure:
placement:
  max_replicas_per_node: 2
  constraints:
    - node.role == worker
  preferences:
    - spread: node.labels.worker
Note that this will not FORCE the containers onto separate nodes, but it will spread them across nodes whenever possible. It is not a hard limit, so be aware of that.
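For reference, the worker labels mentioned above would be applied on a manager node with something like the following (the node hostnames are illustrative):

docker node update --label-add worker=worker1 swarm-worker-1
docker node update --label-add worker=worker2 swarm-worker-2
docker node update --label-add worker=worker3 swarm-worker-3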

Docker swarm node unable to detect service from another host in swarm

My goal is to set up a docker swarm on a group of 3 linux (ubuntu) physical workstations and run a dask cluster on that.
$ docker --version
Docker version 17.06.0-ce, build 02c1d87
I am able to init the docker swarm and add all of the machines to the swarm.
cordoba$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
j8k3hm87w1vxizfv7f1bu3nfg     box1       Ready    Active
twg112y4m5tkeyi5s5vtlgrap     box2       Ready    Active
upkr459m75au0vnq64v5k5euh *   box3       Ready    Active         Leader
I then run docker stack deploy -c docker-compose.yml dask-cluster on the Leader box.
Here is docker-compose.yml:
version: "3"
services:
dscheduler:
image: richardbrks/dask-cluster
ports:
- "8786:8786"
- "9786:9786"
- "8787:8787"
command: dask-scheduler
networks:
- distributed
deploy:
replicas: 1
restart_policy:
condition: on-failure
placement:
constraints: [node.role == manager]
dworker:
image: richardbrks/dask-cluster
command: dask-worker dscheduler:8786
environment:
- "affinity:container!=dworker*"
networks:
- distributed
depends_on:
- dscheduler
deploy:
replicas: 3
restart_policy:
condition: on-failure
networks:
distributed:
and here is richardbrks/dask-cluster:
# Official python base image
FROM python:2.7
# update apt repository
RUN apt-get update
# only install enough libraries to run dask on a cluster (with monitoring)
RUN pip install --no-cache-dir \
        psutil \
        dask[complete]==0.15.2 \
        bokeh
When I deploy the stack, the dworker containers that are not on the same machine as dscheduler do not know what dscheduler is. I ssh'd into one of these nodes and looked at the environment, and dscheduler was not there. I also tried to ping dscheduler and got "ping: unknown host".
I thought Docker was supposed to provide internal DNS-based service discovery, so that calling dscheduler would resolve to the address of the dscheduler service.
Is there some setup on my machines that I am missing, or is something missing from my files?
All of this code is also located at https://github.com/MentalMasochist/dask-swarm
According to this issue in swarm:
Because of some networking limitations (I think related to virtual IPs), the ping tool will not work with overlay networking. Are your service names resolvable with other tools like dig?
Personally I could always connect from one service to the other using curl. Your setup seems correct and your services should be able to communicate.
FYI: depends_on is not supported in swarm mode.
Update 2: I think you are not using the port. The service name is no replacement for the port; you need to use the port as the container knows it internally.
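As a quick sanity check (assuming the image is Debian-based, as python:2.7 is), name resolution and the scheduler port can be tested from inside one of the running dworker containers; the container ID is illustrative:

docker exec -it <dworker-container-id> getent hosts dscheduler
docker exec -it <dworker-container-id> python -c "import socket; socket.create_connection(('dscheduler', 8786)); print('connected')"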
There was nothing wrong with dask or docker swarm. The problem was bad router firmware. After I went back to a prior version of the router firmware, the cluster worked fine.

How to Set Minimum Container Requirements Using Docker Swarm

In Docker Swarm you can set maximum system requirements like so:
my-service:
  image: hello-world
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 4GB
I have a container that has minimum system requirements of 2 CPU cores and 4GB of RAM which is the exact size of the nodes in my Docker Swarm. This means that when this container is running, it needs to be the only container running on that node.
However, when I run the container alongside others, other containers get placed on the same node. How can I ensure that Docker gives this container a minimum level of CPU and RAM?
Update
I added reservations as suggested by @yamenk; however, I still get other containers starting on the same node, which causes performance problems for the container I am trying to protect:
my-service:
  image: hello-world
  deploy:
    resources:
      reservations:
        cpus: '2'
        memory: 4GB
Update
Apparently the effect of memory reservations in Docker Swarm is not very well documented, and they work on a best-effort basis. To understand the effect of the memory reservation flag, check the documentation:
When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.
...
Memory reservation is a soft-limit feature and does not guarantee the limit won't be exceeded. Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation hints/setup.
To enforce that no other container runs on the same node, you need to set service constraints. What you can do is give nodes in the swarm specific labels and use these labels to schedule services to run only on nodes that have those labels.
As described here, node labels can be added to a node using the command:
docker node update --label-add hello-world=yes <node-name>
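The labels currently set on a node can be verified afterwards with docker node inspect, for example:

docker node inspect --format '{{ .Spec.Labels }}' <node-name>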
Then, inside your stack file, you can restrict the container to run only on nodes that have the specified label, and constrain the other containers to avoid nodes labeled hello-world=yes.
my-service:
  image: hello-world
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == yes
other-service:
  ...
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == no
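One caveat (not part of the original answer): the node.labels.hello-world == no constraint only matches nodes where that label has explicitly been set to no, so the remaining nodes would also need to be labeled, e.g.:

docker node update --label-add hello-world=no <other-node-name>

Alternatively, the second service could use the constraint node.labels.hello-world != yes, which also matches nodes that do not carry the label at all.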
If you want to start replicas of my-service on multiple nodes and still have one container running on each node, you need to set my-service to global mode and add the same label to every node where you want a container to run.
Global mode ensures that exactly one container will run on each node that satisfies the service constraints:
my-service:
  image: hello-world
  deploy:
    mode: global
    placement:
      constraints:
        - node.labels.hello-world == yes
Old Answer:
You can set resource reservations as such:
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        reservations:
          cpus: '1'
          memory: 20M

Deploy a docker stack on one node (co-schedule containers like docker swarm)

I'm aware that docker-compose with docker-swarm (which is now legacy) is able to co-schedule some services on one node (using dependency filters such as link)
I was wondering if this kind of co-scheduling is possible using modern docker engine swarm mode and the new stack deployment introduced in Docker 1.13
In docker-compose file version 3, links are said to be ignored while deploying a stack in a swarm, so obviously links aren't the solution.
We have a bunch of servers for running short-running batch jobs, and the network between them is not very fast. We want to run each batch job (which consists of multiple containers) on one server to avoid networking overhead. Is this feature implemented in docker stack or docker swarm mode, or should we use the legacy docker-swarm?
Also, I couldn't find co-scheduling with another container in the placement policies.
@Roman: You are right.
To deploy to a specific node you need to use placement policy:
version: '3'
services:
  job1:
    image: example/job1
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
  job2:
    image: example/job2
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
networks:
  example:
    driver: overlay
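The hostname to use in such a constraint can be checked with docker node ls (the HOSTNAME column), for example:

docker node ls --format '{{ .Hostname }}'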
You can still use depends_on.
It's worth having a look at dockerize too.
