Docker swarm: make sure that one replica always accesses the same replica of another service

Hey, first of all I'm not sure if this is possible at all. I have two different services in my Docker swarm, each replicated n times. Service A accesses service B via DNS. Below is a simplified version of my docker-compose file:
version: "3.7"
services:
A:
image: <dockerimage_A>
deploy:
replicas: 5
B:
image: <dockerimage_B>
deploy:
replicas: 5
The replicas of service A reach the replicas of service B via the DNS entry of the Docker ingress network and send tasks to B. The runtime of a task on B varies, and the task is blocking; the connection from A to B is blocking as well. Because of the round-robin load balancing it can happen that a fast A finishes its task and is then routed to another B that is still blocked, while yet another B sits idle with nothing to do.
To solve this it would be ideal if one replica of A were always routed to the same replica of B. Is there a way to change the load balancing to achieve that?

I solved it on my own with the following hacky solution: I set the hostname for each replica individually using the slot id.
services:
  A:
    hostname: "A-{{.Task.Slot}}"
    deploy:
      replicas: 2
  B:
    environment:
      - SERVICEA=http://A-{{.Task.Slot}}/
    deploy:
      replicas: 2
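In the snippet above the per-slot hostname sits on A and B points back at it. If, as in the question, A is the caller and B the worker, the same trick presumably runs in the other direction, pairing each A replica with the B replica that shares its slot number (SERVICEB is an illustrative variable name, not from the original):

services:
  A:
    environment:
      # each A replica targets the B replica with the same slot number
      - SERVICEB=http://B-{{.Task.Slot}}/
    deploy:
      replicas: 2
  B:
    hostname: "B-{{.Task.Slot}}"
    deploy:
      replicas: 2

Since {{.Task.Slot}} expands to the same number in both services, replica 1 of A always talks to replica 1 of B, and so on.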

Related

Docker Swarm: ping by incremental hostname host.<id>

I have a service that needs to connect to the other instances of itself to establish a quorum.
The service has an environment variable like:
initialDiscoverMembers=db.1:5000,db.2:5000,db.3:5000
They can never find each other. I've tried logging into other containers and pinging other services by <name>.<id>, like ping redis.1, and it doesn't work.
Is there a way in Docker (Swarm) to get the incremental hostnames working for connections as well? I looked at endpoint_mode: dnsrr, but that doesn't seem to be what I want.
I think I may have to just create three separate instances of the service and name them differently, but that seems cumbersome.
You cannot refer to each container independently using the incremental host.<id>, since DNS resolution in Swarm is done on a per-service basis; what you can do is add a hostname alias to each container based on its Swarm slot.
For example, right now you're using a db service, so you could add:
version: '3.7'
services:
  db:
    image: postgres
    deploy:
      replicas: 3
    hostname: "db-{{.Task.Slot}}"
    ports:
      - 5000:5432
In this case, since all the containers of the Swarm service are on the same network, you can address them as db-1, db-2 and db-3.
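With those aliases in place, the discovery variable from the question can point at the per-slot names. One caveat, as an assumption about the setup: peers on the same overlay network connect to the container port rather than the published one, so for the postgres example above that would be 5432:

environment:
  - initialDiscoverMembers=db-1:5432,db-2:5432,db-3:5432

If your own image listens on port 5000 inside the container, keep :5000 instead.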

Docker stack deploy using overlay network - inconsistent behavior

I am deploying 2 containers (application and SQL) to the same network using a docker-compose.yml file (Swarm stack deploy).
Most of the time, the application has no problem talking to the SQL container via its hostname as the datasource in the connection string.
However, there are times where it simply can't find it. To debug this, I have verified that the overlay network is indeed created on each node, and when inspecting the network on each node I see that the container does belong to it.
Moreover, when I run docker exec to enter the application container and try to ping the SQL container, the hostname does resolve to the correct IP, but there is still no response.
This is extremely frustrating, as it only occurs from time to time.
Any suggestions on how to debug this issue?
version: '3.2'
services:
  sqlserver:
    image: xxxx:5000/sql_image
    hostname: sqlserver
    deploy:
      endpoint_mode: dnsrr
    networks:
      devnetwork:
        aliases:
          - sqlserver
  test:
    image: xxxx:5000/test
    deploy:
      endpoint_mode: dnsrr
      restart_policy:
        condition: none
      resources:
        reservations:
          memory: 2048M
    networks:
      - devnetwork
networks:
  devnetwork:
    driver: overlay
Service discovery and DNS problems under load are a known bug in swarm mode; we have hit this problem many times. You can find open issues here and here.
If you run a network-heavy application, consider separating your worker and manager nodes; it helps the managers perform service discovery reliably.
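A minimal sketch of that separation, using a standard Swarm placement constraint (applied here to the test service from the compose file above):

services:
  test:
    deploy:
      placement:
        constraints:
          # keep application tasks off the manager nodes
          - node.role == worker

You can additionally drain the managers with docker node update --availability drain <node> so they run only the control plane.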
You can also change the service discovery component and use something like Consul or ZooKeeper as part of your stack.
I would also consider using a service mesh for communication between services; Consul can do this for you. You can gain a lot from this design pattern, for example security and encrypted communication between services.

Docker prod and dev environment difference

Microservices have one db per service. On dev we use docker-compose to bring up the whole environment: web, php, mysql. But what is a good way to do this in production? When load increases we have to create several copies of the application on different servers, but they all should use the same database.
What's the best way to do this?
==========================
1 app and 1 db on dev (using docker-compose); 10 apps and 1 db on prod. Since the db contains all the data, it must be shared between the copies of the application.
I know about Kubernetes and Docker Swarm, but I am asking about the general approach of separating the db from the application on prod while keeping them together on dev.
In a Compose v3 file you can specify a deploy: replicas value, which will replicate your service according to the properties and parameters you set. Each replica can talk to the single replica of your db by name: the overlay network they are attached to routes requests to the database via DNS. For example:
version: "3.3"
services:
api1:
image: api1:latest
deploy:
replicas: 3
api2:
image: api2:latest
deploy:
replicas: 6
redis:
image: redis
deploy:
replicas: 1
... etc.
Now each service simply connects to a host called 'redis' (the service name), which is resolved by DNS on the shared network (which I haven't shown), on the default port. Note that it's not necessary to 'link' the services to the db; a lot of examples do that, but it's a deprecated practice. You also don't have to expose any ports: the traffic stays internal to the network, so no ingress from a client external to the network is required.
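For reference, the network the answer alludes to but doesn't show could be declared along these lines (the name backend is illustrative):

version: "3.3"
services:
  api1:
    image: api1:latest
    networks:
      - backend
  redis:
    image: redis
    networks:
      - backend
networks:
  backend:
    driver: overlay

Every service attached to the network gets a DNS entry for its service name on that network, which is what lets the apis find redis by name.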

Docker swarm stops spinning up containers at 250

TL;DR: Docker won't spin up more than 250 containers.
I'm deploying 3 docker services to a swarm with 2 nodes. 2 of the services need 1 container each (replicas: 1 in the docker-compose file), and the third service needs 300 containers (replicas: 300).
The problem is that it spins up the 3 services, the first two with 1 container each (they work like they should), but the third service only spins up 248 containers out of 300 (I see this when I run docker service ls). I tried to find out whether there is a limit per service or per swarm, but couldn't find any.
I will much appreciate any help I can get.
If it matters, each node has 30GB RAM and 8 cores, and I use only a third of the RAM.
I just figured it out. The problem is not with the service or the swarm; it's with the network.
When I use driver: overlay, the default subnet is 10.0.0.0/24, which yields 254 usable addresses. So I changed the mask of the subnet to /22, which yields 1022 addresses. I added:
ipam:
  config:
    - subnet: 10.0.0.0/22
And now the network section in the docker-compose file looks like this:
networks:
  web:
    driver: overlay
    ipam:
      config:
        - subnet: 10.0.0.0/22
I don't think you can spin up more than 250 replicas for one service, because all of them sitting on the same network would use up more than the roughly 250 IP addresses a /24 provides. Unless you define a custom IPv6 network and try that, you're better off adding an extra network to your swarm and spinning up a new, identical service on the other network.
TL;DR:
Define 2 networks and add them to the service, then see if you can spin up more than 250 replicas.
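A sketch of that workaround, with illustrative names (myworker:latest is a placeholder image): two identical services, each confined to its own overlay network so neither exhausts a single /24:

services:
  worker-a:
    image: myworker:latest
    deploy:
      replicas: 150
    networks:
      - web1
  worker-b:
    image: myworker:latest
    deploy:
      replicas: 150
    networks:
      - web2
networks:
  web1:
    driver: overlay
  web2:
    driver: overlay

That said, the accepted fix above (widening the subnet to /22) is simpler when you control the network definition.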

How to fetch IPs of a service in a docker swarm cluster?

I am running a docker swarm mode cluster with 2 nodes, deploying 5 services [mysql, mongo, app], and I wish to fill the db with an Ansible script from my manager node. But I cannot get the IP from the nodes to access the db services in their containers.
e.g.:
mysql -h {{ mysql_service_host }} ....
How do I get the container IP or the service IP from a node?
Is it possible to use host mode in docker swarm?
For services (containers) that are part of the same network you can simply use the service name; Docker includes a DNS resolver that handles IP resolution. You will need to make your services part of an overlay network, which can span more than one node.
E.g.:
services:
  myapp:
    image: myimage:1.0
    deploy:
      replicas: 1
    networks:
      - privnet
  maindb:
    image: mysql
    deploy:
      replicas: 1
    networks:
      - privnet
networks:
  privnet:
    driver: overlay
This creates an overlay network with two services. The corresponding containers can be created on any node; it doesn't matter where, because they are part of the same overlay network and can all communicate with each other.
Within myapp, you can use maindb as a DNS name for the mysql service. Docker resolves it to the proper IP within the privnet network.
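Applied to the question's Ansible snippet, the service name would stand in for the host variable (the variable name comes from the question; the vars file is hypothetical):

# Ansible vars: point the playbook at the Swarm service name
mysql_service_host: maindb

Note this only resolves from inside a container attached to privnet; if the playbook runs on the manager host itself, publish the mysql port and target localhost instead.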
By the way, a swarm cluster with 2 nodes doesn't make much sense: the Raft consensus protocol needs at least 3 manager nodes to tolerate the failure of one. https://raft.github.io
