Redis cluster with Docker Swarm mode

I am a newbie to Docker Swarm. I was trying to deploy a Redis cluster on Docker Swarm with a compose file. I want the Redis cluster to use port 6380, so I configured the port and mounted the Redis configuration file in the compose file.
But when I ran docker stack deploy --compose-file docker-compose.yml node, I got an error stating: "Sorry, the cluster configuration file redis-node.conf is already used by a different Redis Cluster node. Please make sure that different nodes use different cluster configuration files."
Here is my docker-compose.yml
version: "3"
services:
redis-node:
image: redis:3.2
ports:
- 6380
deploy:
replicas: 6
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
volumes:
- /var/docker/redis/node:/data
command:
redis-server --port 6380 --logfile redis-node.log --appendonly yes --appendfilename redis-node.aof --appendfsync everysec --cluster-enabled yes --cluster-config-file redis-node.conf
restart: always
How can I deploy a Redis cluster with a mounted redis.conf in Docker Swarm mode?

From the redis cluster docs:
cluster-config-file: Note that despite the name of this option, this is not a user-editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup.
Are you sharing the volume where this is saved across the cluster? That would seem to be the problem.
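If that's the case, a minimal sketch of one way to avoid the collision (assuming one Redis node per swarm node is acceptable): run the service in global mode and replace the shared bind mount with a named volume, so each task persists its cluster state to its own node-local /data:

version: "3"
services:
  redis-node:
    image: redis:3.2
    deploy:
      mode: global                  # at most one task per node, so no two tasks share /data
    volumes:
      - redis-data:/data            # named local volume, created separately on each node
    command: redis-server --port 6380 --appendonly yes --cluster-enabled yes --cluster-config-file redis-node.conf
volumes:
  redis-data:

You would still need to form the cluster yourself (e.g. with redis-trib.rb in Redis 3.2) against the individual task IPs.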

Related

Limit deployment of certain services with docker swarm/compose

I have this compose file:
version: "3.3"
services:
icecc-scheduler:
build: services/icecc-scheduler
restart: unless-stopped
network_mode: host
icecc-daemon:
build: services/icecc-daemon
restart: unless-stopped
network_mode: host
I then have a Docker swarm configured with 5 machines; the one I'm on is the manager. When I deploy my stack, I want the icecc-daemon container to be deployed to all nodes in the swarm, while the icecc-scheduler is only deployed once (preferably to the swarm manager). Is there any way to have this level of control with docker compose/stack/swarm?
Inside Docker Swarm, you can achieve the desired behaviour by using placement constraints.
To ensure a service is deployed only to the manager node, use the constraint - "node.role==manager"
To ensure a service is deployed only once, you can use the
deploy:
  mode: replicated
  replicas: 1
section. This will make your service run as one replica only inside the whole swarm cluster.
To deploy a service as exactly one container per swarm node, you can use:
deploy:
  mode: global
More information on these parameters can be found in the official docs.
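Putting both together for the compose file above, a sketch might look like this (docker stack deploy ignores build:, so prebuilt image names are assumed here):

version: "3.3"
services:
  icecc-scheduler:
    image: icecc-scheduler:latest   # assumed prebuilt image; stack deploy ignores build:
    deploy:
      mode: replicated
      replicas: 1                   # exactly one instance in the whole swarm
      placement:
        constraints:
          - "node.role==manager"    # pin it to the manager
  icecc-daemon:
    image: icecc-daemon:latest      # assumed prebuilt image
    deploy:
      mode: global                  # one task on every node in the swarm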

Disable external node service accessibility in docker swarm

I have a Docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has 2 services running inside it. My problem is how to force the ubuntu service on node1 to connect only to the mysql service on node1, rather than using the round-robin method to select a mysql service.
So when I connect to mysql from ubuntu on node1 with mysql -hmysql -uroot -p, it should select only the mysql on node1.
Here is the docker-compose file which describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it selects the mysql service on both nodes with a round-robin algorithm.
What I am trying to achieve is to force each service to be visible only to the services inside the same node.
I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a Unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket onto that volume. Docker will create a volume on each node, and thus you'll have your communication closed within the node.
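A sketch of the socket approach (the volume name and socket path are assumptions, and you may need to fix permissions on the shared directory for the mysql user):

version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    command: --socket=/var/run/mysqld-shared/mysqld.sock   # put the socket on the shared volume
    volumes:
      - mysql-sock:/var/run/mysqld-shared
    deploy:
      mode: global
  ubuntu:
    image: ubuntu:20.04
    entrypoint: tail -f /dev/null
    volumes:
      - mysql-sock:/var/run/mysqld-shared
    deploy:
      mode: global
volumes:
  mysql-sock: {}   # the default "local" driver creates a separate volume on each node

Inside the ubuntu container you would then connect with mysql -uroot -p --socket=/var/run/mysqld-shared/mysqld.sock, which always reaches that node's mysqld.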
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. Now, not only is it hard to build, it is also an anti-pattern and a security threat, so I don't recommend you implement this idea.
Lastly, there is also Kubernetes, where containers inside a pod can communicate with each other via localhost, but I think you won't go that far, will you?
You should have a look at mode: host.
You can bypass the routing mesh, so that when you access the bound port on a given node, you are always accessing the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
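Applied to the mysql service from the question, a sketch might look like this (the published port is an assumption). Note that clients must then connect to the node's own IP and published port; connecting by service name over the overlay network would still be load-balanced:

  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    ports:
      - target: 3306      # mysql's port inside the container
        published: 3306   # bound directly on each node, bypassing the routing mesh
        protocol: tcp
        mode: host
    deploy:
      mode: global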
Unless I'm missing something, I would say you should not use global deploy and instead declare 2 ubuntu services and 2 mysql services in the compose file, or deploy 2 separate stacks, and in both cases use constraints to pin containers to specific nodes.
An example for the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]
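With this layout, the container pinned to node1 connects explicitly to its local instance, e.g. mysql -hmysql1 -uroot -p, and no round-robin is involved because each service name resolves to exactly one replica.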

How to get my machine back to swarm manager status?

I have two AWS instances:
production-01
docker-machine-master
I ssh into docker-machine-master and run docker stack deploy -c deploy/docker-compose.yml --with-registry-auth production and I get this error:
this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
My guess is the swarm manager went down at some point and this new instance spun up somehow, keeping the same information/configuration minus the swarm manager info. Maybe the internal IP changed or something. I'm making that guess because the launch times differ by months; the production-01 instance was launched 6 months earlier. I wouldn't know for sure because I am new to AWS, Docker, and this project.
I want to deploy code changes to the production-01 instance but I don't have ssh keys to do so. Also, my hunch is that production-01 is a replica noted in the docker-compose.yml file.
I'm the only dev on this project so any help would be much appreciated.
Here's a copy of my docker-compose.yml file with names changed.
version: '3'
services:
  database:
    image: postgres:10
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    deploy:
      replicas: 1
    volumes:
      - db:/var/lib/postgresql/data
  aservicename:
    image: 123.456.abc.amazonaws.com/reponame
    ports:
      - 80:80
    depends_on:
      - database
    environment:
      DB_HOST: database
      DATA_IMPORT_BUCKET: some_sql_bucket
      FQDN: somedomain.com
      DJANGO_SETTINGS_MODULE: name.settings.production
      DEBUG: "true"
    deploy:
      mode: global
    logging:
      driver: awslogs
      options:
        awslogs-group: aservicename
  cron:
    image: 123.456.abc.amazonaws.com/reponame
    depends_on:
      - database
    environment:
      DB_HOST: database
      DATA_IMPORT_BUCKET: some_sql_bucket
      FQDN: somedomain.com
      DOCKER_SETTINGS_MODULE: name.settings.production
    deploy:
      replicas: 1
    command: /name/deploy/someshellfile.sh
    logging:
      driver: awslogs
      options:
        awslogs-group: cron
networks:
  default:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: 192.168.100.0/24
volumes:
  db:
    driver: rexray/ebs
I'll assume you only have the one manager, and that production-01 is a worker.
If docker info shows Swarm: inactive and you don't have backups of the Swarm raft log, then you'll need to create a new swarm with docker swarm init.
Be sure it has the rexray/ebs driver by checking docker plugin ls. All nodes will need that plugin driver to use the db volume.
If you can't SSH to production-01, there will be no way to have it leave and join the new swarm. You'd need to deploy a new worker node and shut down that existing server.
Then you can docker stack deploy that app again and it should reconnect the db volume.
Note 1: Don't redeploy the stack on new servers while it's still running on the production-01 worker; it would fail because the EBS volume for db will still be connected to production-01.
Note 2: For anything beyond learning, it's best to run three managers (managers are also workers by default). That way, if one node gets killed, you still have a working swarm.
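A sketch of the recovery sequence described above (paths and names taken from the question):

# confirm the swarm state on docker-machine-master
docker info --format '{{.Swarm.LocalNodeState}}'

# create a fresh swarm and verify the volume plugin is present
docker swarm init
docker plugin ls

# redeploy the stack once production-01 is shut down or replaced
docker stack deploy -c deploy/docker-compose.yml --with-registry-auth production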

Docker Swarm Service Clustering

I want to use docker-compose with Docker Swarm (I use docker version 1.13 and compose files with the version: '3' syntax).
Is each service reachable as a "single" service to the other services? Here is a simplified example to be clear:
version: '3'
services:
  nodejs:
    image: mynodeapp
    container_name: my_app
    ports:
      - "80:8080"
    environment:
      - REDIS_HOST=my_redis
      - REDIS_PORT=6379
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - my_net
    command: npm start
  redis:
    image: redis
    container_name: my_redis
    restart: always
    expose:
      - 6379
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - my_net
networks:
  my_net:
    external: true
Let's say I have 3 VMs which are configured as a swarm. So there is one nodejs container running on each VM, but there are only two redis containers.
On the VM where no redis is running: will my nodejs container know about the redis?
Additional questions:
When I set replicas: 4 for my redis, I will have two redis containers on one VM: will this be a problem for my nodejs app?
Last question:
When I set replicas: 4 for my nodeapp: will this even work, given that port 80 is now published more than once?
The services have to be stateless. In the case of databases, it is necessary to set up cluster mode in each instance, since they are stateful.
In the same order you asked:
One service does not see another service as a set of replicas. Nodejs will see a single Redis, which will have one IP, no matter which node its replicas are located on. That's the beauty of Swarm.
Yes, you can have Nodejs on one node and Redis on another node and they will be visible to each other. That's what the manager does: it makes the containers "believe" they are running on the same machine.
Also, you can have many replicas on the same node without a problem; they will be perceived as a whole. In fact, they use the same volume.
And last, as an implication of (1), there will be no problem, because you are not actually exposing port 80 twice. Even with 20 replicas, you have a unique entrypoint to your service: a single IP:PORT address.
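You can observe that single virtual IP yourself (a sketch; note that container_name is ignored in swarm mode, so the DNS name is the service name redis, and <stack> stands for whatever stack name you deployed with):

# from inside any container attached to my_net: the service name resolves to one VIP
getent hosts redis

# from a manager node: inspect the VIP that swarm assigned to the service
docker service inspect --format '{{.Endpoint.VirtualIPs}}' <stack>_redis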

Docker-compose: deploying service in multiple hosts

I have a docker-compose file that deploys 8 different docker services on the same host. Is it possible to deploy them on different hosts? I would like to deploy some services on one host and others on another host, remotely. Would I need to use Docker Swarm, or is there an easier way to do this?
I have read that it could be done using DOCKER_HOST, but if I configure /etc/default/docker with this variable, all the services would run on the remote host, and what I need is some services on one remote host and other services on another remote host.
We can do this with docker compose v3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using the command:
$ docker swarm init
You can add more nodes as workers or managers:
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your compose v3 deployment file to create a stack. The compose file should only contain prebuilt images; you can't use a Dockerfile for deployment in Swarm mode.
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View the status of your stack's services:
$ docker stack services PL
Try to use Labels & Placement constraints to put services on different nodes.
Example "dev-compose-deploy.yml" file for your reference -
version: "3"
services:
nginx:
image: nexus.example.com/pl/nginx-dev:latest
extra_hosts:
- "dev-pldocker-01:10.2.0.42”
- "int-pldocker-01:10.2.100.62”
- "prd-plwebassets-01:10.2.0.62”
ports:
- "80:8003"
- "443:443"
volumes:
- logs:/app/out/
networks:
- pl
deploy:
replicas: 3
labels:
feature.description: “Frontend”
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
placement:
constraints: [node.role == worker]
command: "/usr/sbin/nginx"
viz:
image: dockersamples/visualizer
ports:
- "8085:8080"
networks:
- pl
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
deploy:
replicas: 1
labels:
feature.description: "Visualizer"
restart_policy:
condition: any
placement:
constraints: [node.role == manager]
networks:
pl:
volumes:
logs:
With docker swarm mode, you can deploy a version 3 compose yml file using:
docker stack deploy -c docker-compose.yml $your_stack_name
The v3 syntax removes a few features that do not apply to swarm mode, like links and dependencies. You should also note that volumes are stored locally on a node by default. Otherwise, the v3 syntax is very similar to the v2 syntax you may already be using. See the following for more details:
https://docs.docker.com/compose/compose-file/
https://docs.docker.com/engine/swarm/
[ Original answer before v3 of the docker-compose.yml ]
For a single docker-compose.yml to deploy to multiple hosts, you need to use the standalone swarm (not the newer swarm mode yet; this is rapidly changing). Spin up a swarm manager that has each host defined as a member of its swarm, and then you can use constraints inside your docker-compose.yml to define which services run on which hosts.
You can also split up your docker-compose.yml into several files, one for each host, and then run multiple docker-compose up commands, with a different DOCKER_HOST value defined for each.
In both cases, you'll need to configure your docker installs to listen on the network, which should be done by configuring TLS on those sockets. This documentation describes what you need to do for that.
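A sketch of the split-file approach (hostnames and file names are assumptions; TLS is set up as described in the documentation above):

# point docker-compose at each host in turn
DOCKER_HOST=tcp://host1:2376 DOCKER_TLS_VERIFY=1 docker-compose -f docker-compose.host1.yml up -d
DOCKER_HOST=tcp://host2:2376 DOCKER_TLS_VERIFY=1 docker-compose -f docker-compose.host2.yml up -d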
You can use docker compose version 3, which provides the ability to do multi-host deployment without using multiple compose files. All you need to do is define a label for each node in the cluster and use the label in a placement constraint.
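For example (a sketch; the label name and value are assumptions):

# on a manager node: attach a label to each node
docker node update --label-add datacenter=east node1

Then reference the label in the service definition:

deploy:
  placement:
    constraints:
      - node.labels.datacenter == east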
You may also want to consider the Overnode tool. It is a container orchestration tool on top of automated multi-host docker-compose, and it is the easiest transition from a single-host docker-compose deployment. (Disclaimer: I am the author and it was published recently.)
