I want a service to be replicated.
The service is constrained to the worker role, but in some cases both replicas are scheduled on the same node instead of one replica on each node.
I have this in my docker-compose.yml:
version: "3"
services:
api-test:
# replace username/repo:tag with your name and image details
image: some-image
deploy:
replicas: 2
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
ports:
- "4001:80"
networks:
- some-network
networks:
some-network:
With the HA scheduling strategy introduced in Docker 1.13 (see this PR), this behavior should not be possible from the swarm mode scheduler unless the other nodes are unavailable for scheduling. A node may be unavailable for scheduling if you have defined a constraint that excludes it, or if you have defined resource reservations (CPU and memory) that are not available on the node. I'm not seeing either of those in the compose file you've provided.
One potential issue is the restart policy. This defines a restart policy on individual containers while swarm mode is also redeploying containers after an outage, and the result can be more running replicas than you asked for. I therefore recommend removing the restart_policy section from your service and letting swarm mode handle the scheduling on its own.
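For reference, the deploy section would then contain only the scheduling directives (same service as above):
deploy:
  replicas: 2
  placement:
    constraints:
      - node.role == worker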
Otherwise, the main reason I've seen for multiple containers on one node is nodes restarting in the cluster. Swarm mode will reschedule services on the still-running nodes (or the first nodes to restart), and it will not preemptively stop a running task to schedule it on another node once other nodes reconnect. You can force a service to rebalance with the command:
docker service update --force $service_name
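For the stack above, assuming it was deployed with docker stack deploy -c docker-compose.yml mystack (the stack name mystack is hypothetical), the service name is prefixed with the stack name:
docker service update --force mystack_api-test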
That is actually normal behavior for Docker Swarm. Swarm chooses nodes to deploy a service based on many criteria (load on each worker, availability, etc.).
So if you want to make sure a service runs exactly once on each node with a specific label/role (e.g. worker), you should use global mode instead of replicas (see here).
That way your compose file would look like this:
version: "3"
services:
api-test:
# replace username/repo:tag with your name and image details
image: some-image
deploy:
mode: global
placement:
constraints:
- node.role == worker
restart_policy:
condition: on-failure
ports:
- "4001:80"
networks:
- some-network
networks:
some-network:
This will deploy your api-test service exactly once on each worker node.
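You can verify the placement by listing the service's tasks together with the node each one landed on (the service name below assumes the stack was deployed as mystack):
docker service ps mystack_api-test --format '{{.Name}} -> {{.Node}}'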
Related
I have this compose file:
version: "3.3"
services:
icecc-scheduler:
build: services/icecc-scheduler
restart: unless-stopped
network_mode: host
icecc-daemon:
build: services/icecc-daemon
restart: unless-stopped
network_mode: host
I then have a docker swarm configured with 5 machines; the one I'm on is the manager. When I deploy my stack, I want the icecc-daemon container to be deployed to all nodes in the swarm while the icecc-scheduler is only deployed once (preferably to the swarm manager). Is there any way to have this level of control with docker compose/stack/swarm?
Inside docker swarm, you can achieve the desired behaviour by using placement constraints.
To ensure the service is deployed only to the manager node, the constraint should be: - "node.role==manager"
To ensure the service is deployed only once, you can use the
deploy:
  mode: replicated
  replicas: 1
section. This will make your service run with a single replica in the whole swarm cluster.
To deploy the service as exactly one container per swarm node, you can use:
deploy:
  mode: global
More information on these parameters is available in the official docs.
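Putting both together for your stack, a sketch of how the compose file might look (this assumes the images are prebuilt and pushed to a registry, since docker stack deploy ignores build:; the image names are placeholders):
version: "3.3"
services:
  icecc-scheduler:
    image: registry.example.com/icecc-scheduler   # placeholder image
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - "node.role==manager"
  icecc-daemon:
    image: registry.example.com/icecc-daemon      # placeholder image
    deploy:
      mode: global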
I have two AWS instances:
production-01
docker-machine-master
I ssh into docker-machine-master and run docker stack deploy -c deploy/docker-compose.yml --with-registry-auth production and I get this error:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
My guess is the swarm manager went down at some point and this new instance spun up somehow, keeping the same information/configuration minus the swarm manager info. Maybe the internal IP changed or something. I'm making that guess because the launch times differ by months; the production-01 instance was launched 6 months earlier. I wouldn't know for sure because I am new to AWS, Docker, and this project.
I want to deploy code changes to the production-01 instance but I don't have ssh keys to do so. Also, my hunch is that production-01 is a replica noted in the docker-compose.yml file.
I'm the only dev on this project so any help would be much appreciated.
Here's a copy of my docker-compose.yml file with names changed.
version: '3'
services:
  database:
    image: postgres:10
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    deploy:
      replicas: 1
    volumes:
      - db:/var/lib/postgresql/data
  aservicename:
    image: 123.456.abc.amazonaws.com/reponame
    ports:
      - 80:80
    depends_on:
      - database
    environment:
      DB_HOST: database
      DATA_IMPORT_BUCKET: some_sql_bucket
      FQDN: somedomain.com
      DJANGO_SETTINGS_MODULE: name.settings.production
      DEBUG: "true"
    deploy:
      mode: global
    logging:
      driver: awslogs
      options:
        awslogs-group: aservicename
  cron:
    image: 123.456.abc.amazonaws.com/reponame
    depends_on:
      - database
    environment:
      DB_HOST: database
      DATA_IMPORT_BUCKET: some_sql_bucket
      FQDN: somedomain.com
      DOCKER_SETTINGS_MODULE: name.settings.production
    deploy:
      replicas: 1
    command: /name/deploy/someshellfile.sh
    logging:
      driver: awslogs
      options:
        awslogs-group: cron
networks:
  default:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 192.168.100.0/24
volumes:
  db:
    driver: rexray/ebs
I'll assume you only have the one manager, and the production-01 is a worker.
If docker info shows Swarm: inactive and you don't have backups of the Swarm raft log, then you'll need to create a new swarm with docker swarm init.
Be sure it has the rexray/ebs driver by checking docker plugin ls. All nodes will need that plugin driver to use the db volume.
If you can't SSH into production-01, there will be no way to have it leave and join the new swarm. You'd need to deploy a new worker node and shut down that existing server.
Then you can docker stack deploy that app again and it should reconnect the db volume.
Note 1: Don't redeploy the stack on new servers while it's still running on the production-01 worker; it would fail because the EBS volume for db will still be connected to production-01.
Note 2: For anything beyond learning, it's best to run three managers (managers are also workers by default). That way if one node gets killed, you still have a working cluster.
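A sketch of the recovery steps as commands, run on docker-machine-master (the comments describe what you'd expect to see, not guaranteed values):
docker info --format '{{.Swarm.LocalNodeState}}'    # "inactive" means there is no swarm to rejoin
docker swarm init                                   # create the new swarm
docker plugin ls                                    # confirm rexray/ebs is installed and enabled
docker swarm join-token worker                      # prints the join command for replacement workers
docker stack deploy -c deploy/docker-compose.yml --with-registry-auth production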
I have just started using Docker. I was able to create a docker-compose file which deploys the three components of my application, with the necessary number of replications, on one host.
I want to replicate the same thing across multiple hosts now.
I have three processes: A [7 copies], B [1 copy], C [1 copy].
I followed the creating-a-swarm tutorial on the Docker website and managed to create a manager and attach two workers to it.
So now when I run my command
docker stack deploy --compose-file docker-compose.yml perf
It does spawn the required number of containers, but all of them on the manager itself.
I would ideally want it to spawn B and C on the manager and all the copies of A distributed between worker 1 and worker 2.
Here is my docker-compose file:
version: '3'
services:
  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost
  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost
  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"
networks:
  perfhost:
volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:
And if I do get this working, how do I use volumes to share data between the container for service A and the container for service B, given that they are on different host machines?
A few tips and answers:
For service names I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special characters except -).
container_name isn't supported in swarm and shouldn't be needed. It looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network.
All services in a single compose file are always on the same docker network in swarm (and in docker-compose for local development), so there is no need for the network assignment or listing.
restart: always isn't needed in swarm. That setting isn't used there, and it's the default behavior anyway. If you're using it with docker-compose, it's rarely needed, as you usually don't want apps in a crash/respawn loop during errors, which just burns CPU. I recommend leaving it off.
Volumes use a "volume driver". The default is local, just like normal docker commands. If you have shared storage, you can use a volume driver plugin from store.docker.com to ensure the shared storage is connected to the correct node.
If you're still having issues with worker/manager task assignment, post the output of docker node ls and maybe docker service ls and docker node ps <managername> so we can help troubleshoot.
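Putting those tips together, a sketch of the first service after cleanup (the lowercase name jmeter is an assumed rename of A; everything else is from the compose file above):
version: '3'
services:
  jmeter:                    # was "A": lowercase and DNS-safe
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
The container_name, restart, and explicit network entries are dropped per the tips above.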
First you should run
docker node ls
And check if all of your nodes are available. If they are, check whether the workers have the images they need to run the containers.
I would also try a constraint using the ID of each node instead; you can see the IDs with the previous command.
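For example, a placement constraint pinned to a node ID (the ID below is a placeholder; use a real one from docker node ls):
deploy:
  placement:
    constraints:
      - node.id == abc123def456   # placeholder node ID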
Run this before docker stack deploy:
mkdir -p /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public/. /srv/service/public
Then use the directory /srv/service/public as a volume in the containers.
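A sketch of the matching bind mount in the compose file (the service name web is hypothetical; the container path comes from the cp command above):
services:
  web:
    volumes:
      - /srv/service/public:/var/www/app/public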
I want to use docker-compose with Docker Swarm (I use Docker version 1.13 and compose files with the version: '3' syntax).
Is each service reachable as a "single" service by the other services? Here is a simplified example to be clear:
version: '3'
services:
  nodejs:
    image: mynodeapp
    container_name: my_app
    ports:
      - "80:8080"
    environment:
      - REDIS_HOST=my_redis
      - REDIS_PORT=6379
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - my_net
    command: npm start
  redis:
    image: redis
    container_name: my_redis
    restart: always
    expose:
      - 6379
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - my_net
networks:
  my_net:
    external: true
Let's say I have 3 VMs which are configured as a swarm. So there is one nodejs container running on each VM, but there are only two redis containers.
On the VM where no redis is running: will my nodejs container know about redis?
Additional question:
When I set replicas: 4 for my redis, I will have two redis containers on one VM: will this be a problem for my nodejs app?
Last question:
When I set replicas: 4 for my nodeapp: will this even work, given that port 80 is now published more than once?
The services have to be stateless. In the case of databases, it is necessary to set up cluster mode in each instance, since they are stateful.
In the same order you asked:
One service does not see another service as being made up of replicas. Nodejs will see a single Redis, which will have one IP, no matter which nodes its replicas are located on. That's the beauty of Swarm.
Yes, you can have Nodejs on one node and Redis on another node and they will be visible to each other. That's what the manager does: it makes the containers "believe" they are running on the same machine.
Also, you can have many replicas on the same node without a problem; they will be perceived as a whole. In fact, they use the same volume.
And last, as an implication of (1), there will be no problem, because you are not actually exposing port 80 twice. Even with 20 replicas, you have a single entrypoint to your service, one particular IP:PORT.
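You can observe this from inside any container attached to my_net: the service name resolves to a single virtual IP, while the special tasks.<service> name resolves to the individual replica IPs (this assumes the image ships a DNS tool like nslookup; the service name redis comes from the compose file above):
nslookup redis         # one virtual IP for the whole service
nslookup tasks.redis   # one IP per running replica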
I have a docker-compose file that deploys 8 different docker services on the same host. Is it possible to deploy them across different hosts? I would like to deploy some services on one host and the others on another remote host. Would I need to use docker swarm, or is there an easier way to do it?
I have read that it could be done using DOCKER_HOST, but if I configure /etc/default/docker with this variable, all the services would run on the one remote host, and what I need is some services on one remote host and other services on another remote host.
We can do this with docker compose v3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using the command
$ docker swarm init
You can add more nodes as worker or manager -
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your v3 compose file, i.e. the deployment file, to create a stack. The compose file should contain only prebuilt images; you can't use a Dockerfile build for deployment in swarm mode.
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View your stack services status -
$ docker stack services PL
Try to use Labels & Placement constraints to put services on different nodes.
Example "dev-compose-deploy.yml" file for your reference -
version: "3"
services:
nginx:
image: nexus.example.com/pl/nginx-dev:latest
extra_hosts:
- "dev-pldocker-01:10.2.0.42”
- "int-pldocker-01:10.2.100.62”
- "prd-plwebassets-01:10.2.0.62”
ports:
- "80:8003"
- "443:443"
volumes:
- logs:/app/out/
networks:
- pl
deploy:
replicas: 3
labels:
feature.description: “Frontend”
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
placement:
constraints: [node.role == worker]
command: "/usr/sbin/nginx"
viz:
image: dockersamples/visualizer
ports:
- "8085:8080"
networks:
- pl
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
deploy:
replicas: 1
labels:
feature.description: "Visualizer"
restart_policy:
condition: any
placement:
constraints: [node.role == manager]
networks:
pl:
volumes:
logs:
With docker swarm mode, you can deploy a version 3 compose yml file using:
docker stack deploy -c docker-compose.yml $your_stack_name
The v3 syntax removes a few features that do not apply to swarm mode, like links and dependencies. You should also note that volumes are stored locally on a node by default. Otherwise the v3 syntax is very similar to the v2 syntax you may already be using. See the following for more details:
https://docs.docker.com/compose/compose-file/
https://docs.docker.com/engine/swarm/
[ Original answer before v3 of the docker-compose.yml ]
For a single docker-compose.yml to deploy to multiple hosts, you need to use the standalone swarm (not the newer swarm mode yet; this is rapidly changing). Spin up a swarm manager that has each host defined as a member of its swarm, and then you can use constraints inside your docker-compose.yml to define which services run on which hosts.
You can also split up your docker-compose.yml into several files, one for each host, and then run multiple docker-compose up commands, with a different DOCKER_HOST value defined for each.
In both cases, you'll need to configure your docker installs to listen on the network, which should be done by configuring TLS on those sockets. This documentation describes what you need to do for that.
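For the split-file approach, a sketch (the hostnames and file names are hypothetical; port 2376 assumes the TLS setup described above):
DOCKER_HOST=tcp://host1.example.com:2376 docker-compose -f docker-compose.host1.yml up -d
DOCKER_HOST=tcp://host2.example.com:2376 docker-compose -f docker-compose.host2.yml up -d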
You can use docker compose version 3, which provides the ability to do multi-host deployment without using multiple compose files. All you need is to define a label for each node in the cluster and use that label in a placement constraint.
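For example (the label key app-role and the value frontend are hypothetical):
docker node update --add-label app-role=frontend worker-1   # label the node
Then reference the label in the service's placement constraints:
deploy:
  placement:
    constraints:
      - node.labels.app-role == frontend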
You may also want to consider the Overnode tool. It is a container orchestration tool on top of automated multi-host docker-compose, and it is the easiest transition from a single-host docker-compose deployment. (Disclaimer: I am the author and it was published recently.)