Docker swarm: run a service only once per node

I am using this Docker version:
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:24:51 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:15 2018
OS/Arch: linux/amd64
Experimental: false
and docker-compose to configure my services. I want to have 3 replicas of a service that run on 3 specially labeled nodes. For this I use a YAML configuration like:
version: '3.7'
services:
  ser:
    deploy:
      placement:
        constraints:
          - "node.labels.cloud.type == nodesforservice"
      replicas: 3
      restart_policy:
        condition: any
      rollback_config:
        delay: 0s
        parallelism: 0
      update_config:
        delay: 6s
        parallelism: 1
    environment:
      - affinity:service!=stackname_servicename
    image: service:latest
and deploy this configuration via
docker stack deploy --compose-file docker-stack.yml stackname
But I have found out that affinity:service!=stackname_servicename does not work properly (or does not work at all); it only works in the deprecated standalone mode. If only 2 of the labeled nodes are currently available, the service gets deployed to one node twice, and that is what I am trying to avoid.
Is there any way in Docker swarm to explicitly say that 2 containers of the same service are not allowed on one node? I have only found the possibility to create global services with --mode global, but I need exactly 3 instances and no more.

If you are using docker service create, you can use --replicas-max-per-node 1 to enforce a 1:1 relationship between container and node.
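For example, a minimal sketch using the image, label, and service name from the question (note that --replicas-max-per-node requires a newer Engine than the 18.06 in the question; it appeared around 19.03):

docker service create --name ser --replicas 3 --replicas-max-per-node 1 \
  --constraint 'node.labels.cloud.type == nodesforservice' service:latest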
If you are using a compose file, you can declare max_replicas_per_node under:
deploy:
  placement:
    max_replicas_per_node: 1
If you need further control over which nodes can run the container, match node labels in the constraints block under placement.
More details here: Compose and Docker compatibility matrix
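Put together with the question's compose file, a minimal sketch might look like this (compose file version 3.8 or later is needed for max_replicas_per_node; the service and label names are taken from the question):

version: "3.8"
services:
  ser:
    image: service:latest
    deploy:
      replicas: 3
      placement:
        max_replicas_per_node: 1
        constraints:
          - "node.labels.cloud.type == nodesforservice"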

This is a rather old thread, but using a node label as a placement constraint in combination with global mode deployment does the trick.
deploy:
  mode: global
  placement:
    constraints:
      - node.labels.cloud.type == nodesforservice
Of course the node label "cloud.type=nodesforservice" needs to be applied to the desired number of nodes.
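For reference, the label can be applied from a manager node like this (the node name is a placeholder):

docker node update --label-add cloud.type=nodesforservice worker-node-1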
In Docker swarm mode there is no such thing as affinity.


Docker port has unwanted port declaration

I am using the latest Docker version; here is the output of docker version:
docker version
Client:
Cloud integration: 1.0.14
Version: 20.10.6
API version: 1.41
Go version: go1.16.3
Git commit: 370c289
Built: Fri Apr 9 22:46:57 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.6
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8728dd2
Built: Fri Apr 9 22:44:56 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.19.0
GitCommit: de40ad0
I run a simple Python Flask image, following https://docs.docker.com/language/python/build-images/:
docker run --publish 5000:5000 python-docker-test
My container is up and running without any problem. The problem is that I observed an additional port declaration, as below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8e8188fe2db3 python-docker-test "python3 -m flask ru…" 4 seconds ago Up 3 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp test_docker_python-docker-test_1
Or more specifically:
PORTS
0.0.0.0:5000->5000/tcp, :::5000->5000/tcp
Output of docker port command
~$ docker port test_docker_python-docker-test_1 5000
0.0.0.0:5000
:::5000
The question is: why do we get :::5000, or generally :::<port_num>? Can we avoid this?
The problem I have is that my bash script that fetches the output of docker port needs to be modified a bit. It's not a big deal; I'm just curious whether something changed in Docker 20.10.3.
Thanks
Alex
0.0.0.0 is the wildcard address in IPv4.
:: is the wildcard address in IPv6.
Docker binds to both so that it can receive requests on both IPv4 and IPv6 network interfaces.
To bind the port only on IPv4, you have to specify the address explicitly:
docker run --publish 0.0.0.0:5000:5000 python-docker-test
Docker doc about networking
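If you want every published port to default to IPv4 only, rather than passing the address on each docker run, the daemon's default binding address can be set in /etc/docker/daemon.json (a sketch; restart the Docker daemon afterwards for it to take effect):

{
  "ip": "0.0.0.0"
}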

docker-compose spec says cpus option is deprecated but docker run says use --cpus

Reading the docker-compose spec (https://github.com/compose-spec/compose-spec/blob/master/spec.md#cpus), it says that the cpus option is DEPRECATED, so even if it still works when I use it, I think using it is not a great idea.
cpus
DEPRECATED: use deploy.reservations.cpus
cpus define the number of (potentially virtual) CPUs to allocate to service containers. This is a fractional number. 0.000 means no limit.
On the other hand, the docker run docs say (https://docs.docker.com/config/containers/resource_constraints/#configure-the-default-cfs-scheduler) that the --cpus option is more convenient.
Specify the CPU CFS scheduler period, which is used alongside --cpu-quota. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. For most use-cases, --cpus is a more convenient alternative.
So I'm very confused: should or shouldn't I use cpus in docker-compose? If not, how do I efficiently control a service's CPU usage?
I'm using Docker without swarm.
Context
$ docker version
Client:
Version: 20.10.2
API version: 1.41
Go version: go1.13.8
Git commit: 20.10.2-0ubuntu1~20.04.2
Built: Tue Mar 30 21:24:57 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.13.8
Git commit: 20.10.2-0ubuntu1~20.04.2
Built: Mon Mar 29 19:10:09 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.3-0ubuntu2.3
GitCommit:
runc:
Version: spec: 1.0.2-dev
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
$ docker-compose version
docker-compose version 1.29.2, build 5becea4c
docker-py version: 5.0.0
CPython version: 3.7.10
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019
You should use deploy.reservations.cpus as the deprecation notice suggests; example:
https://github.com/compose-spec/compose-spec/blob/master/deploy.md#cpus
services:
  frontend:
    image: awesome/webapp
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
So what it means is that cpus should go under reservations (or limits), inside resources, under deploy.
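One caveat worth adding (my note, not part of the original answer): with classic docker-compose and no swarm, the deploy section is normally ignored; passing the --compatibility flag tells docker-compose to translate deploy.resources settings into their non-swarm equivalents:

docker-compose --compatibility up -d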

docker stack communicate between containers

I'm trying to set up a swarm using Docker, but I'm having issues with communication between containers.
I have cluster with 5 nodes. 1 manager and 4 workers.
3 apps: redis, splash, myapp
myapp has to be on the 4 workers
redis, splash just on the manager
myapp has to be able to communicate with redis and splash
I tried using the container name, but it's not working. It resolves the container name to a different IP.
ping splash # returns a different IP than the container actually has
I am deploying the swarm using docker stack:
docker stack deploy -c docker-stack.yml myapp
Linking containers also doesn't work.
Any ideas ? Am I missing something ?
root@swarm-manager:~# docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:18 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: false
docker-stack.yml contains:
version: "3"
services:
splash:
container_name: splash
image: scrapinghub/splash
ports:
- 8050:8050
- 5023:5023
deploy:
mode: global
placement:
constraints:
- node.role == manager
redis:
container_name: redis
image: redis
ports:
- 6379:6379
deploy:
mode: global
placement:
constraints:
- node.role == manager
myapp:
container_name: myapp
image: myapp_image:latest
environment:
REDIS_ENDPOINT: redis:6379
SPLASH_ENDPOINT: splash:8050
deploy:
mode: global
placement:
constraints:
- node.role == worker
entrypoint:
- ping google.com
---- EDIT ----
I tried with curl also. It didn't work.
docker stack deploy -c docker-stack.yml myapp
Creating network myapp_default
Creating service myapp_splash
Creating service myapp_redis
Creating service myapp_myapp
curl http://myapp_splash:8050
curl: (7) Failed to connect to myapp_splash port 8050: No route to host
curl http://splash:8050
curl: (7) Failed to connect to splash port 8050: No route to host
What worked was using the actual container name of splash, which is some randomly generated string.
curl http://myapp_splash.d7bn0dpei9ijpba4q41vpl4zz.tuk1cimht99at9g0au8vj9lkz:8050
But this doesn't really help me.
Ping is not the proper tool for trying to connect to services; for some reason it doesn't work with docker networks. Try curl http://serviceName instead.
Other than that: containers can't be named when using stack deploy; instead, the service name (which coincidentally is the same) is used to access another service.
I managed to get it working using curl http://tasks.splash:8050 or http://tasks.myapp_splash:8050.
I don't know what is causing this issue though. Feel free to comment with an answer.
It seems that containers in a stack are named tasks.<service name>, so the command ping tasks.myservice works for me!
An interesting point to note: names like <stackname>_<service name> will also resolve and are pingable, but the IP address is incorrect. This is frustrating.
(For example, if you do docker stack deploy -c my.yml AA, you'll get a name like AA_myservice, which resolves to an incorrect address.)
To add to the above answer: from the network point of view, curl and ping start out doing the same thing. Both resolve the name passed to them; then curl tries to connect using the specified protocol (HTTP in the example above), while ping sends ICMP echo requests.
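A likely explanation for all of the above (my reading, not confirmed in the thread): by default a swarm service name resolves to a single virtual IP (VIP) that load-balances across the tasks, while tasks.<service> resolves directly to the individual container IPs, which is why ping behaves differently against the two names. If you want the service name itself to resolve to container IPs, you can request DNS round-robin instead of a VIP (a sketch; endpoint_mode requires compose file version 3.3 or later):

deploy:
  endpoint_mode: dnsrr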

Mesos Slave - Docker compose

I am using Mesos version 1.0.3. I installed Mesos via:
docker pull mesosphere/mesos-master:1.0.3
docker pull mesosphere/mesos-slave:1.0.3
Using docker-compose to start mesos-master and mesos-slave.
The docker-compose file:
services:
  #
  # Zookeeper must be provided externally
  #
  #
  # Mesos
  #
  mesos-master:
    image: mesosphere/mesos-master:1.0.3
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/master:/tmp/mesos
    environment:
      MESOS_CLUSTER: "mesos-cluster"
      MESOS_QUORUM: "1"
      MESOS_ZK: "zk://localhost:2181/mesos"
      MESOS_PORT: 5000
      MESOS_REGISTRY_FETCH_TIMEOUT: "2mins"
      MESOS_EXECUTOR_REGISTRATION_TIMEOUT: "2mins"
      MESOS_LOGGING_LEVEL: INFO
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"
  mesos-slave1:
    image: mesosphere/mesos-slave:1.0.3
    depends_on: [ mesos-master ]
    restart: always
    privileged: true
    network_mode: host
    volumes:
      - ~/mesos-data/slave-1:/tmp/mesos
      - /sys/fs/cgroup:/sys/fs/cgroup
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      MESOS_CONTAINERIZERS: docker
      MESOS_MASTER: "zk://localhost:2181/mesos"
      MESOS_PORT: 5051
      MESOS_WORK_DIR: "/var/lib/mesos/slave-1"
      MESOS_LOGGING_LEVEL: WARNING
      MESOS_INITIALIZE_DRIVER_LOGGING: "false"
The Mesos master runs fine without any issues, but the slave does not start, failing with the error below. Not sure what else is missing here.
I0811 21:38:28.952507 1 main.cpp:243] Build: 2017-02-13 08:10:42 by ubuntu
I0811 21:38:28.952599 1 main.cpp:244] Version: 1.0.3
I0811 21:38:28.952601 1 main.cpp:247] Git tag: 1.0.3
I0811 21:38:28.952603 1 main.cpp:251] Git SHA: c673fdd00e7f93ab7844965435d57fd691fb4d8d
SELinux: Could not open policy file <= /etc/selinux/targeted/policy/policy.29: No such file or directory
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO@log_env@726: Client environment:zookeeper.version=zookeeper C client 3.4.8
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO@log_env@730: Client environment:host.name=<HOST_NAME>
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO@log_env@737: Client environment:os.name=Linux
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO@log_env@738: Client environment:os.arch=3.8.13-98.7.1.el7uek.x86_64
2017-08-11 21:38:29,062:1(0x7f4f78d0d700):ZOO_INFO@log_env@739: Client environment:os.version=#2 SMP Wed Nov 25 13:51:41 PST 2015
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO@log_env@747: Client environment:user.name=(null)
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO@log_env@755: Client environment:user.home=/root
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO@log_env@767: Client environment:user.dir=/
2017-08-11 21:38:29,063:1(0x7f4f78d0d700):ZOO_INFO@zookeeper_init@800: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f4f82265e50 sessionId=0 sessionPasswd=<null> context=0x7f4f5c000930 flags=0
2017-08-11 21:38:29,064:1(0x7f4f74ccb700):ZOO_INFO@check_events@1728: initiated connection to server [127.0.0.1:2181]
2017-08-11 21:38:29,067:1(0x7f4f74ccb700):ZOO_INFO@check_events@1775: session establishment complete on server [127.0.0.1:2181], sessionId=0x15dc8b48c6d0155, negotiated timeout=10000
Failed to perform recovery: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Error response from daemon: client is newer than server (client API version: 1.24, server API version: 1.22)
'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/slave-1/meta/slaves/latest
This ensures agent doesn't recover old live executors.
The command below returns the same version for the Docker client API and the Docker server API, so I am not sure what is wrong with the setup.
docker -H unix:///var/run/docker.sock version
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:18:46 2016
OS/Arch: linux/amd64
The Mesos slave was using client API version 1.24.
It works after setting the following environment variable for the mesos slave:
DOCKER_API_VERSION=1.22
The mapping between Docker release versions and API versions is documented here:
https://docs.docker.com/engine/api/v1.26/#section/Versioning
The other option is to update the Docker version.
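Applied to the compose file above, that would look something like this (a sketch showing only the relevant service and the added key):

mesos-slave1:
  image: mesosphere/mesos-slave:1.0.3
  environment:
    DOCKER_API_VERSION: "1.22"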

docker-proxy - Error starting userland proxy while trying to bind on 443

I'm trying to install Discourse with Docker on Ubuntu 16.04 LTS, with Apache listening on ports 80 and 443.
When I try to launch the app I get the following error:
starting up existing container
+ /usr/bin/docker start app Error response from daemon: driver failed programming external connectivity on endpoint app
(dade361e77fbf29f4d9667febe57a06f168f916148e10cc1365093d8f97026bb):
Error starting userland proxy: listen tcp 0.0.0.0:443: listen: address
already in use Error: failed to start containers: app
From what I've found, docker-proxy is what is trying to bind on 443.
How can I solve this?
Some details...
docker version
Client:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.2
API version: 1.23
Go version: go1.5.4
Git commit: b9f10c9
Built: Wed Jun 1 22:00:43 2016
OS/Arch: linux/amd64
docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 4
Server Version: 1.11.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 25
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.4.0-28-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.39 GiB
Name: sd-12345
ID: 6OLH:SAG5:VWTW:BL7U:6QYH:4BBS:QHBN:37MY:DLXA:W64E:4EVZ:WBAK
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
perhaps, stop apache? – vitr Jul 22 '16 at 2:56
^^^ This comment from vitr should be the Accepted Answer:
Docker cannot proxy a service from within a container to a port on the host without first stopping any service that is already using that port.
In this case, Apache must be stopped with a command such as sudo service apache2 stop.
Then docker start app can be run, and Docker should do its thing unhindered.
See the related question: docker run -> name is already in use by container
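To double-check what is holding the port before stopping anything, something like this should work on Ubuntu 16.04 (ss ships with iproute2, which is installed by default):

sudo ss -ltnp | grep ':443'
sudo service apache2 stop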
Edit /etc/docker/daemon.json and add:
{
  "userland-proxy": false
}
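A note on this (my addition, not part of the original answer): this stops Docker from running the docker-proxy process that binds the published port on the host, so the "address already in use" error goes away and port publishing is handled by iptables rules instead. The daemon must be restarted for daemon.json changes to take effect:

sudo systemctl restart docker

Be aware that the published port still overlaps with Apache, so external requests to 443 may reach the container rather than Apache.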
