Disable external node service accessibility in docker swarm

I have a Docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has 2 services running inside it. My problem is how to force the ubuntu service on node1 to connect only to the mysql service on node1, and not use the round-robin method to select a mysql service.
So when I connect to mysql from ubuntu on node1 with mysql -hmysql -uroot -p, it should select only the mysql on node1.
Here is the docker-compose file which describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it selects the mysql service on both nodes with a round-robin algorithm.
What I am trying to achieve is to force each service to be visible only to the services inside the same node.

I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket on that volume. Docker will create the volume separately on each node, and so your communication stays closed within the node.
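For illustration, here is a minimal sketch of that socket-sharing idea. It mounts a named volume over MySQL's default socket directory (/var/run/mysqld in the mysql:5.7 image); the volume name mysql-sock is made up for this example:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      # mysqld creates its socket in /var/run/mysqld, which now lives on a node-local volume
      - mysql-sock:/var/run/mysqld
    deploy:
      mode: global
  ubuntu:
    image: ubuntu:20.04
    entrypoint: tail -f /dev/null
    volumes:
      # same node-local volume, so each ubuntu task only ever sees its own node's socket
      - mysql-sock:/var/run/mysqld
    deploy:
      mode: global
volumes:
  mysql-sock: {}
From the ubuntu container you would then connect with mysql --socket=/var/run/mysqld/mysqld.sock -uroot -p; since local volumes never leave the node, this can only reach the MySQL task on the same node.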
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. However, not only is this hard to implement, it is also an anti-pattern and a security risk, so I don't recommend implementing this idea.
Finally, there is also Kubernetes, where containers inside a pod can communicate with each other via localhost, but I think you won't go that far, will you?

You should have a look at mode: host.
You can bypass the routing mesh, so that when you access the bound port on a given node, you always reach the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
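Adapted to the question's stack, publishing MySQL's port in host mode on every node might look roughly like this (publishing 3306 is just an example choice, not something from the original answer):
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      mode: global
    ports:
      - target: 3306
        published: 3306
        protocol: tcp
        mode: host
The ubuntu task would then have to connect to its own node's address on the published port rather than to the service name, because the service name still resolves to the load-balanced virtual IP.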

Unless I'm missing something, I would say you should not use global deployment; instead, declare two ubuntu services and two mysql services in the compose file (or deploy two separate stacks), and in both cases use placement constraints to pin the containers to a specific node.
An example for the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]
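With this layout each container connects to its sibling by service name, e.g. mysql -hmysql1 -uroot -p from ubuntu1 and mysql -hmysql2 -uroot -p from ubuntu2; each name resolves to exactly one task, so no round-robin is involved.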

Related

Containerized Rails application in Swarm access containerized database in compose

I have two virtual machines (VMs), each in a Docker Swarm environment. One VM has a mysql container running under docker-compose (for now, let's say I can't move it to swarm); on the other machine I'm trying to connect a containerized Rails app that is inside the swarm. I'm using the mysql2 gem to connect to the database, but I'm getting the following error:
Mysql2::Error::ConnectionError: Access denied for user 'bduser'@'10.0.13.248' (using password: YES)
I have double-checked the credentials. I also ran an alpine container on the VM where Rails is running, installed mysql, and successfully connected to the db on the other VM (it was not in the swarm, though). I checked the IP address and I'm not sure where it came from; it is not the IP of the db's container.
Compose file for the database:
version: '3.4'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: db-container
    ports:
      - "3306:3306"
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: mysecurepassword
    command: --sql-mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION --max-connections 350
    volumes:
      - ./mysql:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping --silent
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 30s
How can I successfully connect the Rails app to the db's container, considering that the db is running under docker-compose and the Rails app is in a swarm on another VM?
If Docker swarm mode is reduced to its core functionality, it adds overlay networks to Docker. Also called VXLANs, these are software-defined networks that containers can be attached to. Overlay networks are the mechanism that allows containers on different hosts to communicate with each other.
With that in mind, even if you otherwise treat your Docker swarm as a set of discrete Docker hosts on which you run compose stacks, you can nonetheless get services to communicate completely privately.
First, on a manager node, create an overlay network with a well-known name. Since the db side here is run with plain docker-compose rather than docker stack deploy, make the network attachable so standalone containers can join it:
docker network create application --driver overlay --attachable
Now, in your compose files deployed on the different nodes, you should be able to reference that network:
# deployed on node1
networks:
  application:
    external: true
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: mysql-password
    networks:
      - application
    volumes:
      - ./mysql/:/var/lib/mysql
# deployed on node2
networks:
  application:
    external: true
services:
  my-rails-app:
    image: my-rails:dev
    build:
      context: src
    networks:
      - application
    volumes:
      - ./data:/data
etc.
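For completeness, a sketch of how the two files might then be brought up; the file and stack names are made up, and the db side can stay on plain docker-compose precisely because the network was created as attachable:
# on node1, where the db lives
docker-compose -f db-compose.yml up -d
# on a manager node, for the swarm side
docker stack deploy -c rails-stack.yml rails
The Rails app then reaches the database by its service name (db) on the shared application network, e.g. as the host entry in database.yml.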

Connect Postgresql from Docker Swarm Container

I have 5 microservices which I intend to deploy over a Docker Swarm cluster consisting of 3 nodes.
I also have a PostgreSQL service running on one of the 3 servers (not dockerized, but installed directly on the server). I assigned the "host" network to all of the services, but they simply refuse to start, with no logs being generated.
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - "host"
    ports:
      - "xxxx:3000"
networks:
  host:
    name: host
    external: true
I also tried starting a centos container on a server which does not have postgres installed; with the host network assigned to it, I was able to ping as well as telnet to the PostgreSQL port.
Can someone please help me narrow down the issue, or point out what I might be missing?
Docker swarm doesn't currently support "host" network_mode, so your best bet (and best practice) would be to pass your PostgreSQL host's IP address as an environment variable to the services that use it.
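A minimal sketch of that first suggestion; the variable names and the address are placeholders, and your application has to read them itself:
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    environment:
      - POSTGRES_HOST=10.0.0.15   # IP of the server where PostgreSQL is installed (example value)
      - POSTGRES_PORT=5432
    deploy:
      mode: replicated
      replicas: 3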
If you are using docker-compose instead of docker swarm, you can set network_mode to host:
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    network_mode: "host"
    ports:
      - "xxxx:3000"
Notice I've removed the networks part of your compose snippet and replaced it with network_mode.

Docker swarm containers connection issues

I am trying to use Docker swarm to create a simple nodejs service that sits behind HAProxy and connects to mysql. So I created this docker-compose file:
And I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1; I only managed to connect to the database using the private IP (10.0.1.4) of the database container.
The backend tries to connect to the database too soon, even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But, as I mentioned above, I still have these issues.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?
The backend service can't connect to the database using localhost or 127.0.0.1; I only managed to connect to the database using the private IP (10.0.1.4) of the database container.
Don't use IP addresses for the connection. Use just the DNS name.
So you must change the connection to DATABASE_HOST=db, because that is the service name you've defined.
localhost is wrong, because the db is running in a different container than your test service.
The backend tries to connect to the database too soon even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is the HAProxy configuration that makes it forward a request for /api/test to http://test:3000?
For DATABASE_HOST=localhost: the name localhost means "my own container". You need to use the name of the service where the db is hosted. localhost is a special DNS name that always points to the host of the application itself; when using containers, that is the container. In cloud development you need to forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and you lose load-balancing), and simply use service names.
As for readiness: Docker has no way of knowing whether the application you started in a container is actually ready. You need to make the service tolerate database unavailability and code/script some polling/fault-tolerance mechanism, for example a small wait-for-the-database loop as sketched below.
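A minimal sketch of such a wait loop, assuming the image has a shell and the nc (netcat) binary; the service name db and port 3306 match the compose file above:
#!/bin/sh
# wait-for-db.sh - block until the db service accepts TCP connections, then start the app
until nc -z db 3306; do
  echo "waiting for db:3306..."
  sleep 2
done
exec npm start   # placeholder: start your actual app here
You would then run this script as the test service's command instead of starting the app directly.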
Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and the db is set up with the proper password, database, etc. (you usually set these things as environment variables in compose, based on the image's Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of all distributed computing: anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm service task each time.
If you can attempt to make this with public Docker Hub images, I can try to test it for you.
Note that in Swarm it's likely easier to use Traefik for the proxy (see the Traefik on Swarm Mode guide), which will auto-update and route incoming requests to the correct container based on the hostname you give in the labels. But you should first test just the app and db; once you know that works, try adding in the proxy layer.
Also, in Swarm all your networks should be overlay, and you don't need to specify that, as it is the default in stacks.
Below is a sample using Traefik with your settings above. I didn't give the test service a specific Traefik hostname, so it should accept all traffic coming in on 80 and forward it to 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:
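One caveat: the --docker, --docker.swarmMode and --api flags (and the traefik.port label) are Traefik v1 syntax; Traefik v2 moved these to providers.docker.* options and a different label format, so with a current traefik image you would need to adjust the command and labels. After deploying, test against a node's IP rather than from inside a container, e.g.:
docker stack deploy -c docker-compose.yml prod
curl http://<node-ip>/api/test
Because of the routing mesh, any swarm node's IP will answer on port 80 here.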

How to use Docker Swarm Mode to share data between containers?

I have just started using Docker. I was able to create a docker-compose file which deploys three components of my application, with the necessary number of replicas, on one host.
I want to replicate the same thing with multiple hosts now.
I have three processes: A [7 copies], B [1 copy], C [1 copy].
I followed the swarm creation tutorial on the Docker website and managed to create a manager and attach two workers to it.
So now when I run my command
docker stack deploy --compose-file docker-compose.yml perf
it does spawn the required number of containers, but all of them on the manager itself.
I would ideally want it to spawn C and B on the manager and all the copies of A distributed between worker 1 and worker 2.
Here is my docker-compose file:
version: '3'
services:
  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost
  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost
  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"
networks:
  perfhost:
volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:
B) And if I do get this working, how do I use volumes to transfer data between the container for service A and the container for service B, given that they are on different host machines?
A few tips and answers:
For service names, I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special characters except -).
container_name isn't supported in swarm and shouldn't be needed. It looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network.
All services in a single compose file are always on the same Docker network in swarm (and in docker-compose for local development), so there is no need for the network assignment or listing.
restart: always isn't needed in swarm. That setting isn't used there, and restarting is the default behaviour anyway. If you're using it for docker-compose, it's rarely needed either, as you usually don't want apps in a respawn loop during errors, which will usually end in a CPU race condition. I recommend leaving it off.
Volumes use a "volume driver". The default is local, just like with normal docker commands. If you have shared storage, you can use a volume driver plugin from store.docker.com to ensure the shared storage is connected to the correct node; see the sketch after these tips.
If you're still having issues with worker/manager task assignment, post the output of docker node ls and maybe docker service ls and docker node ps <managername> so we can help troubleshoot.
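As a purely illustrative sketch of the volume-driver point above, a compose volume definition using a third-party plugin could look roughly like this; vieux/sshfs is just one example of such a plugin, and the host, path and password are placeholders:
volumes:
  LogFilesLocationFolder:
    driver: vieux/sshfs                               # example shared-storage plugin
    driver_opts:
      sshcmd: "user@storage-host:/exports/logs"       # placeholder remote location
      password: "changeme"                            # placeholder credential
Which driver (if any) is right depends entirely on your storage; the default local driver simply keeps each node's volumes separate.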
First you should run
docker node ls
and check whether all of your nodes are available. If they are, check whether the workers have the images they need to run the containers.
I would also try a constraint using the ID of each node instead; you can see the IDs with the previous command (see the sketch below).
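For example, pinning service B to the manager by ID instead of by role (the ID below is a placeholder for whatever docker node ls prints):
  B:
    image: mysql:5.5
    deploy:
      placement:
        constraints: [node.id == x1y2z3placeholderid]   # replace with the real node ID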
Run this before docker stack deploy:
mkdir -p /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public/. /srv/service/public/
Then use the directory /srv/service/public as a volume in the containers.
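A sketch of what that bind mount could look like in the compose file; the container path /var/www/app/public is taken from the command above, so adjust it to wherever your services expect the data:
services:
  A:
    image: A:host
    volumes:
      - /srv/service/public:/var/www/app/public   # host path prepared by the commands above
Note that a bind mount is always local to the node, so the directory has to be created and populated on every node where tasks of the service can run.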

Docker Swarm Service Clustering

I want to use docker-compose with Docker Swarm (I use Docker version 1.13 and compose with the version: '3' syntax).
Is each service reachable as a "single" service by the other services? Here is a simplified example to be clear:
version: '3'
services:
  nodejs:
    image: mynodeapp
    container_name: my_app
    ports:
      - "80:8080"
    environment:
      - REDIS_HOST=my_redis
      - REDIS_PORT=6379
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - my_net
    command: npm start
  redis:
    image: redis
    container_name: my_redis
    restart: always
    expose:
      - 6379
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - my_net
networks:
  my_net:
    external: true
Let's say I have 3 VMs which are configured as a swarm. So there is one nodejs container running on each VM, but there are only two redis containers.
On the VM where no redis is running: will my nodejs container know about redis?
Additional question:
When I set replicas: 4 for my redis, I will have two redis containers on one VM: will this be a problem for my nodejs app?
Last question:
When I set replicas: 4 for my node app: will this even work, given that I have now exposed port 80 more than once?
The services have to be stateless. Databases are stateful, so for them it is necessary to set up cluster mode in each instance.
In the same order you asked:
One service does not see another service as a set of replicas. Nodejs will see a single Redis, which will have one IP, no matter on which node its replicas are located. That's the beauty of Swarm.
Yes, you can have Nodejs on one node and Redis on another node and they will be visible to each other. That's what the manager does: it makes the containers "believe" they are running on the same machine.
Also, you can have many replicas on the same node without a problem; they will be perceived as a whole. In fact, they use the same volume.
And last, as an implication of (1), there will be no problem, because you are not actually exposing port 80 multiple times. Even with 20 replicas, you have a single entry point to your service, one particular IP:PORT address.
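You can see that single virtual IP for yourself from inside any container attached to the same overlay network; assuming the image has DNS tools installed, a quick check could look like this (service names taken from the compose file above):
# resolves to the service's single virtual IP (VIP)
nslookup redis
# resolves to the individual task IPs behind that VIP
nslookup tasks.redis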
