Is there a way to control the distribution of services across different computers? I have one master with two workers and 5 services:
web server
database
redis
celery
s3 storage connection
I only want to outsource the celery workers and run everything else on the master. Is there a way to control that with docker swarm? I have not created a registry yet, because I am not sure if that is still necessary.
Here is my current experimental docker-compose file.
version: "3.8"
volumes:
s3data:
driver: local
services:
web:
image: localhost:5000/web
build: .
env_file:
- ./.env
environment:
- ENVIRONMENT=develop
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./app/:/app/
- ./lib/lrg_omics/:/lrg-omics/
- s3data:/datalake/
- /data/media/:/appmedia/
- /data/static/:/static/
ports:
- "8000:8000"
depends_on:
- db
- redis
- s3vol
links:
- redis:redis
restart: always
db:
image: postgres
volumes:
- /data/db/:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
redis:
restart: always
image: redis:alpine
ports:
- "6379:6379"
celery:
restart: on-failure
image: pp-celery-worker
build:
context: .
dockerfile: Dockerfile
command: bash -c "celery -A main worker -l info --concurrency 8"
env_file:
- ./.env
volumes:
- ./app/:/app/
- ./lib/lrg_omics/:/lrg-omics/
- s3data:/datalake/
environment:
- DB_HOST=db
- DB_NAME=app
- DB_USER=postgres
- DB_PASS=postgres
depends_on:
- db
- redis
- web
- s3vol
deploy:
replicas: 2
placement:
max_replicas_per_node: 1
s3vol:
image: elementar/s3-volume
command: /data s3://PQC
environment:
- BACKUP_INTERVAL=2
- AWS_ACCESS_KEY_ID=...
- AWS_SECRET_ACCESS_KEY=...
- ENDPOINT_URL=https://example.com
volumes:
- s3data:/data
When I deploy this with sudo docker stack deploy --compose-file docker-compose-distributed.yml QC
and then look at the services, I get something like this:
sudo docker stack services QC
>>>
ID NAME MODE REPLICAS IMAGE PORTS
xx5hkbswipoz QC_celery replicated 0/2 (max 1 per node) celery-worker:latest
natb3trv9ngi QC_db replicated 0/1 postgres:latest
1bxpkb18ojay QC_redis replicated 1/1 redis:alpine *:6379->6379/tcp
6rsl5gfpd0oa QC_s3vol replicated 1/1 elementar/s3-volume:latest
aszkle6msmqr QC_web replicated 0/1 localhost:5000/web:latest *:8000->8000/tcp
For some reason only the redis and S3 containers run, and both of them on the master. Nothing runs on the workers.
I am quite new to docker swarm so there is probably more than one thing wrong here. Any comments on best practices are welcome.
To determine why the services are not starting, run docker service ps QC_celery --no-trunc; it will show the state of the service and a message from docker.
To control placement, consult the Compose file version 3 reference on placement constraints. Basically it entails adding placement constraints to the deploy: section:
deploy:
replicas: 2
placement:
max_replicas_per_node: 1
constraints:
- node.role==worker
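If you also want the remaining services (web, db, redis, s3vol) to stay on the master, the same mechanism works in reverse; a minimal sketch, added under each of those services:

deploy:
  placement:
    constraints:
      - node.role==manager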
While, nominally, compose.yml and stack.yml files share a format, they support different feature subsets, and for complex deployments it becomes helpful to split the deployment into discrete compose.yml files for docker compose and stack.yml files for swarm deployments.
docker stack deploy -c docker-compose.yml -c docker-stack.yml QC can merge a docker-compose.yml base file with stack-specific settings, and you can keep docker compose artifacts in your docker-compose.override.yml. These artifacts include:
build: - docker swarm needs the image to be built and available in a registry, either a local (self-hosted) one or Docker Hub.
depends_on:, links: - not supported by swarm, which assumes services can be restarted at any time and will find each other using docker networks.
restart: - controlled by restart_policy: under deploy:
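As a rough illustration of that split (the registry address and image tags below are placeholders, not taken from your setup), a docker-stack.yml carrying only the swarm-specific overrides could look like this:

version: "3.8"
services:
  web:
    image: registry.example.com:5000/web:latest
    deploy:
      placement:
        constraints:
          - node.role==manager
      restart_policy:
        condition: any
  celery:
    image: registry.example.com:5000/celery-worker:latest
    deploy:
      replicas: 2
      placement:
        max_replicas_per_node: 1
        constraints:
          - node.role==worker
      restart_policy:
        condition: on-failure

Everything that only docker compose understands (build:, depends_on:, links:, restart:) then stays in docker-compose.yml / docker-compose.override.yml for local development.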
Related
My target container produces NGINX logs that I want to collect with Elastic Fleet's NGINX integration.
I followed every step, even successfully hosting the Fleet Server and the Agent in two separate containers. What confuses me is how I can configure my Agent, which has the NGINX integration set up on its policy, to collect logs from the service container.
I have mostly encountered examples that install the elastic-agent as a package directly on the target container.
I've attached three snippets of my docker-compose setup that I use for the Fleet, Agent and App containers.
FLEET SERVER
fleet:
image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
healthcheck:
test: "curl -f http://127.0.0.1:8220/api/status | grep HEALTHY 2>&1 >/dev/null"
retries: 12
interval: 5s
hostname: fleet
container_name: fleet
restart: always
user: root
environment:
- FLEET_SERVER_ENABLE=1
- "FLEET_SERVER_ELASTICSEARCH_HOST=https://elasticsearch:9200"
- FLEET_SERVER_ELASTICSEARCH_USERNAME=elastic
- FLEET_SERVER_ELASTICSEARCH_PASSWORD=REPLACE1
- FLEET_SERVER_ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
- FLEET_SERVER_INSECURE_HTTP=1
- KIBANA_FLEET_SETUP=1
- "KIBANA_FLEET_HOST=https://kibana:5601"
- KIBANA_FLEET_USERNAME=elastic
- KIBANA_FLEET_PASSWORD=REPLACE1
- KIBANA_FLEET_CA=$CERTS_DIR/ca/ca.crt
- FLEET_ENROLL=1
ports:
- 8220:8220
networks:
- elastic
volumes:
- certs:$CERTS_DIR
Elastic Agent
agent:
image: docker.elastic.co/beats/elastic-agent:$ELASTIC_VERSION
container_name: agent
hostname: agent
restart: always
user: root
healthcheck:
test: "elastic-agent status"
retries: 90
interval: 1s
environment:
- FLEET_ENROLLMENT_TOKEN=REPLACE2
- FLEET_ENROLL=1
- FLEET_URL=http://fleet:8220
- FLEET_INSECURE=1
- ELASTICSEARCH_HOSTS='["https://elasticsearch:9200"]'
- ELASTICSEARCH_USERNAME=elastic
- ELASTICSEARCH_PASSWORD=REPLACE1
- ELASTICSEARCH_CA=$CERTS_DIR/ca/ca.crt
- "STATE_PATH=/usr/share/elastic-agent"
networks:
- elastic
volumes:
- certs:$CERTS_DIR
App Container (NGINX logs)
demo-app:
image: ubuntu:bionic
container_name: demo-app
build:
context: ./docker/
dockerfile: Dockerfile
volumes:
- ./app:/var/www/html/app
- ./docker/nginx.conf:/etc/nginx/nginx.conf
ports:
- target: 90
published: 9090
protocol: tcp
mode: host
networks:
- elastic
The ELK stack currently runs on version 7.17.0.
If anyone could provide any info on what needs to be done next, it would be very helpful, thanks!
You could share the NGINX log files through a volume mount.
Mount a host directory to the NGINX log directory, and mount that same directory into your Elastic Agent container. Then you can harvest the NGINX logs from there in the Elastic Agent container.
There might be directory read/write permission problems; feel free to ask below.
Something like this:
nginx compose:
demo-app:
...
volumes:
- ./app:/var/www/html/app
- ./docker/nginx.conf:/etc/nginx/nginx.conf
+ - /home/user/nginx-log:/var/log/nginx
...
elastic agent compose:
services:
agent:
...
volumes:
- certs:$CERTS_DIR
+ - /home/user/nginx-log:/usr/share/elastic-agent/nginx-log
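If you would rather not depend on a host path, the same idea should also work with a named volume shared by the two services, assuming they end up in the same compose project (the volume name nginx-logs below is purely illustrative):

volumes:
  nginx-logs:

services:
  demo-app:
    # ... existing settings ...
    volumes:
      - nginx-logs:/var/log/nginx
  agent:
    # ... existing settings ...
    volumes:
      - certs:$CERTS_DIR
      - nginx-logs:/usr/share/elastic-agent/nginx-log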
I want to deploy HA Postgresql with Failover Patroni and HAProxy (like single entrypoint) in docker swarm.
I have docker-compose.yml -
version: "3.7"
services:
etcd1:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd1
hostname: etcd1
command: etcd -name etcd1 -initial-advertise-peer-urls http://etcd1:2380
etcd2:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd2
hostname: etcd2
command: etcd -name etcd2 -initial-advertise-peer-urls http://etcd2:2380
etcd3:
image: patroni
networks:
- test
env_file:
- docker/etcd.env
container_name: test-etcd3
hostname: etcd3
command: etcd -name etcd3 -initial-advertise-peer-urls http://etcd3:2380
patroni1:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni1
container_name: test-patroni1
environment:
PATRONI_NAME: patroni1
deploy:
placement:
constraints: [node.role == manager]
# - node.labels.type == primary
# - node.role == manager
patroni2:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni2
container_name: test-patroni2
environment:
PATRONI_NAME: patroni2
deploy:
placement:
constraints: [node.role == worker]
# - node.labels.type != primary
# - node.role == worker
patroni3:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: patroni3
container_name: test-patroni3
environment:
PATRONI_NAME: patroni3
deploy:
placement:
constraints: [node.role == worker]
# - node.labels.type != primary
# - node.role == worker
haproxy:
image: patroni
networks:
- test
env_file:
- docker/patroni.env
hostname: haproxy
container_name: test-haproxy
ports:
- "5000:5000"
- "5001:5001"
command: haproxy
networks:
test:
driver: overlay
attachable: true
And I deploy these services to docker swarm with this command:
docker stack deploy --compose-file docker-compose.yml test
When I use this command, my services get created, but the patroni2 and patroni3 services don't start on the other nodes, whose role is worker. They don't start at all!
I want to see my services deployed on all nodes (3 - one manager and two workers) that exist in the docker swarm.
But if I delete the constraints, all my services start on one node when I deploy docker-compose.yml to the swarm.
Maybe these services can't see my network, though I created it following the official Docker documentation.
With different service names, docker will not attempt to spread containers across multiple nodes, and will fall back to the least used node that satisfies the requirements, where least used is measured by the number of scheduled containers.
You could attempt to solve this by using the same service name and 3 replicas. This would require that they be defined identically. To make this work, you can leverage a few features, the first being that tasks.etcd will resolve to the individual IP addresses of each etcd service container. The second is service templates, which can be used to inject values like {{.Task.Slot}} into the settings for hostname, volume mounts, and env variables. The challenge is that the resulting list of IPs will likely not give you what you want, which is a way to uniquely address each replica from the other replicas. Hostname seems like it would work, but it unfortunately does not resolve in docker's DNS implementation (and wouldn't be easy to implement, since it's possible to create a container with the capabilities to change the hostname after docker has deployed it).
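Purely as an illustration of those templates (and with the caveat above that the templated hostname will not be resolvable by the other replicas), a single replicated patroni service might look roughly like this:

patroni:
  image: patroni
  hostname: "patroni{{.Task.Slot}}"
  env_file:
    - docker/patroni.env
  environment:
    PATRONI_NAME: "patroni{{.Task.Slot}}"
  networks:
    - test
  deploy:
    replicas: 3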
The option you are left with is configuring constraints on each service to run on specific nodes. That's less than ideal, and reduces the fault tolerance of these services. If you have lots of nodes that can be separated into 3 groups then using node labels would solve the issue.
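A sketch of that node-label approach (the node names are placeholders for your actual manager and workers):

docker node update --label-add patroni=1 manager-node
docker node update --label-add patroni=2 worker-node-1
docker node update --label-add patroni=3 worker-node-2

and then, for example for patroni2:

deploy:
  placement:
    constraints:
      - node.labels.patroni == 2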
docker-compose.yml
version: "3"
services:
daggr:
image: "docker.pvt.com/test/daggr:stable"
hostname: '{{.Node.Hostname}}'
deploy:
mode: global
resources:
limits:
cpus: "2"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
networks:
- webnet
networks:
webnet:
Right now, I am deploying docker containers with the below command:
docker stack deploy -c docker-compose.yml daggrstack
Is there a way to specify in the docker-compose.yml file that if the image, docker.pvt.com/test/daggr:stable, is updated (i.e. docker build, docker tag, and docker push :stable), then the running containers are automatically re-deployed with the updated image?
So I don't have to re-run docker stack deploy every time I push a new docker image.
Is there a way to specify in the docker-compose.yml file that if the image, docker.pvt.com/test/daggr:stable, is updated (i.e. docker build, docker tag, and docker push :stable), then the running containers are automatically re-deployed with the updated image?
The answer is no. Docker swarm does not auto-update a service when a new image is available. This should be handled as part of a continuous deployment system.
However, Docker does make it easy to update the images of already running services.
As described in Apply rolling updates to a service, you can update the image of a service via:
docker service update --image docker.pvt.com/test/daggr:stable daggrstack_daggr
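Note that docker stack deploy prefixes the service name with the stack name, hence daggrstack_daggr. Since swarm will not watch the registry for you, a common pattern is to run that update from the CI job that pushes the image; a rough sketch (--force redeploys the tasks even if swarm thinks nothing changed, and --with-registry-auth lets the manager resolve the new digest from a private registry):

docker build -t docker.pvt.com/test/daggr:stable .
docker push docker.pvt.com/test/daggr:stable
docker service update --force --with-registry-auth \
  --image docker.pvt.com/test/daggr:stable daggrstack_daggr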
I am trying to deploy my working docker-compose setup to a docker swarm. Everything seems OK, except that the only service that gets replicated and produces a running container is the redis one; the three others get stuck and never produce a running container. They don't even download their respective images.
I can't find any debug feature and all the logs are empty. I'm completely helpless.
Let me show you the current state of my installation.
docker node ls prints =>
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
oapl4et92vjp6mv67h2vw8boq boot2docker Ready Active
x2fal9iwj6aqt1vgobyp1smv1 * manager1 Ready Active Leader
lmtuojednaiqmfmm2izl7npf0 worker1 Ready Active
The docker-compose file =>
version: '3'
services:
mongo:
image: mongo
container_name: mongo
restart: always
volumes:
- /data/db:/data/db
deploy:
placement:
constraints: [node.role == manager]
ports:
- "27017:27017"
redis:
image: redis
container_name: redis
restart: always
bnbkeeper:
image: registry.example.tld/keepers:0.10
container_name: bnbkeeper
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
depends_on:
- mongo
- redis
ports:
- "8080:8080"
links:
- mongo
- redis
environment:
- REDIS_HOST=redis
- MONGO_HOST=mongo
bnbkeeper-ws:
image: registry.example.tld/keepers:0.10
container_name: bnbkeeper-ws
restart: unless-stopped
depends_on:
- mongo
- redis
ports:
- "3800:3800"
links:
- mongo
- redis
environment:
- REDIS_HOST=redis
command: npm run start:subscription
The current state of my services
ID NAME MODE REPLICAS IMAGE PORTS
tbwfswsxx23f stack_bnbkeeper replicated 0/5 registry.example.tld/keepers:0.10
khrqtx28qoia stack_bnbkeeper-ws replicated 0/1 registry.example.tld/keepers:0.10
lipa8nvncpxb stack_mongo replicated 0/1 mongo:latest
xtz2411htcg7 stack_redis replicated 1/1 redis:latest
My successful redis service (docker service ps stack_redis)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
cqv0njcgsw6f stack_redis.1 redis:latest worker1 Running Running 25 minutes ago
My unsuccessful mongo service (docker service ps stack_mongo)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
yipokxxiftqq stack_mongo.1 mongo:latest Running New 25 minutes ago
I'm completely new to docker swarm and probably made a silly mistake here, but I couldn't find much documentation on how to set up such a simple stack.
To monitor, try this:
journalctl -f -n10
Then run the docker stack deploy command in a separate session and see what it shows
Try removing the port publishing and adding --endpoint-mode dnsrr to your service.
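In a version 3.3+ compose file the same thing can be expressed under deploy:; a rough sketch for one of the stuck services (note that dnsrr cannot be combined with ingress-published ports, so the ports: section would have to be dropped or switched to mode: host):

bnbkeeper:
  image: registry.example.tld/keepers:0.10
  deploy:
    endpoint_mode: dnsrr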
I'm using docker 1.12.1
I have a simple docker-compose script.
version: '2'
services:
jenkins-slave:
build: ./slave
image: jenkins-slave:1.0
restart: always
ports:
- "22"
environment:
- "constraint:NODE==master1"
jenkins-master:
image: jenkins:2.7.1
container_name: jenkins-master
restart: always
ports:
- "8080:8080"
- "50000"
environment:
- "constraint:NODE==node1"
I run this script with docker-compose -p jenkins up -d.
This creates my 2 containers, but only on my master (from where I execute the command). I would expect one to be created on the master and one on the node.
I also tried to add
networks:
jenkins_swarm:
driver: overlay
and
networks:
- jenkins_swarm
after every service, but this fails with:
Cannot create container for service jenkins-master: network jenkins_jenkins_swarm not found
while the network does show up when I run docker network ls.
Can someone help me deploy 2 containers on my 2 nodes with docker-compose? Swarm is definitely working on my "cluster"; I followed this tutorial to verify.
Compose doesn't support Swarm Mode at the moment.
When you run docker compose up on the master node, Compose issues docker run commands for the services in the Compose file, rather than docker service create - which is why the containers all run on the master. See this answer for options.
On the second point, networks are scoped in 1.12. If you inspect your network you'll find it's been created at swarm-level, but Compose is running engine-level containers which can't see the swarm network.
We can do this with docker compose v3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using the command
$ docker swarm init
You can add more nodes as worker or manager -
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your compose v3 deployment file to create a stack. The compose file should contain only prebuilt images; you can't give a Dockerfile for deployment in Swarm mode.
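Since swarm will not build images for you, anything referenced in the stack file has to be built and pushed to a registry beforehand, roughly like this (using the registry host from the example file below, purely for illustration):

docker build -t nexus.example.com/pl/nginx-dev:latest .
docker push nexus.example.com/pl/nginx-dev:latest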
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View the status of your stack's services -
$ docker stack services PL
Try to use Labels & Placement constraints to put services on different nodes.
Example "dev-compose-deploy.yml" file for your reference
version: "3"
services:
nginx:
image: nexus.example.com/pl/nginx-dev:latest
extra_hosts:
- "dev-pldocker-01:10.2.0.42”
- "int-pldocker-01:10.2.100.62”
- "prd-plwebassets-01:10.2.0.62”
ports:
- "80:8003"
- "443:443"
volumes:
- logs:/app/out/
networks:
- pl
deploy:
replicas: 3
labels:
feature.description: "Frontend"
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
placement:
constraints: [node.role == worker]
command: "/usr/sbin/nginx"
viz:
image: dockersamples/visualizer
ports:
- "8085:8080"
networks:
- pl
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
deploy:
replicas: 1
labels:
feature.description: "Visualizer"
restart_policy:
condition: any
placement:
constraints: [node.role == manager]
networks:
pl:
volumes:
logs: