Containerized Rails application in Swarm accessing a containerized database in compose - ruby-on-rails

I have two virtual machines (VMs), each in a Docker Swarm environment. One VM has a MySQL container running under docker-compose (for now, let's say I can't move it to swarm). On the other machine I'm trying to connect a containerized Rails app that is inside the swarm, using the mysql2 gem to connect to the database, but I'm getting the following error:
Mysql2::Error::ConnectionError: Access denied for user 'bduser'@'10.0.13.248' (using password: YES)
I have double-checked the credentials. I also ran an alpine container in the VM where the Rails app is running, installed the mysql client, and successfully connected to the db on the other VM (that container was not in the swarm, though). I checked the IP address in the error and I'm not sure where it comes from; it is not the IP of the db's container.
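For reference, the connectivity test described above was along these lines (a sketch; the client package and the db VM address are assumptions):
docker run -it --rm alpine sh
# inside the container:
apk add --no-cache mysql-client
mysql -h <db-vm-ip> -u bduser -p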
Compose file for the database:
version: '3.4'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: db-container
    ports:
      - "3306:3306"
    expose:
      - "3306"
    environment:
      MYSQL_ROOT_PASSWORD: mysecurepassword
    command: --sql-mode STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION --max-connections 350
    volumes:
      - ./mysql:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping --silent
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 30s
How can I successfully connect the Rails app to the db container, considering that the db is running under docker-compose and the Rails app is in a swarm on another VM?

If docker swarm mode is reduced to its core functionality, it adds overlay networks to docker. Also called vxlans, these are software-defined networks that containers can be attached to; overlay networks are the mechanism that allows containers on different hosts to communicate with each other.
With that in mind, even if you otherwise treat your docker swarm as a set of discrete docker hosts on which you run compose stacks, you can nonetheless get services to communicate completely privately.
First, on a manager node, create an overlay network with a well-known name:
docker network create application --driver overlay
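If either side is run with plain docker-compose rather than docker stack deploy (as with your existing database), the overlay network also has to be created as attachable so standalone containers can join it; a variant of the command above, assuming that setup:
docker network create application --driver overlay --attachable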
Now in your compose files, deployed as compose stacks on different nodes, you should be able to reference that network:
# deployed on node1
networks:
  application:
    external: true
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: mysql-password
    networks:
      - application
    volumes:
      - ./mysql/:/var/lib/mysql
# deployed on node2
networks:
  application:
    external: true
services:
  my-rails-app:
    image: my-rails:dev
    build:
      context: src
    networks:
      - application
    volumes:
      - ./data:/data
etc.
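With both stacks attached to the application network, the Rails side can then reach MySQL by the service name db. A minimal sketch of the corresponding config/database.yml, where the user, password source and database name are assumptions rather than taken from the question:
production:
  adapter: mysql2
  host: db                              # service name on the shared overlay network
  username: bduser                      # user mentioned in the question
  password: <%= ENV["DB_PASSWORD"] %>   # assumed to come from the environment
  database: myapp_production            # placeholder database name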

Related

Connect Postgresql from Docker Swarm Container

I have 5 microservices which I intend to deploy over a docker swarm cluster consisting of 3 nodes.
I also have a postgresql service running on one of the 3 servers (not dockerized, but installed directly on the server). I assigned the network as "host" for all of the services, but they simply refuse to start, with no logs being generated.
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - "host"
    ports:
      - "xxxx:3000"
networks:
  host:
    name: host
    external: true
I also tried to start a centos container on a server which does not have postgres installed, and with the host network assigned to it I was able to ping, as well as telnet, the postgresql port.
Can someone please help me narrow down the issue, or point out what I might be missing?
Docker swarm doesn't currently support "host" network_mode, so your best bet (and best practice) would be to pass your postgresql host IP address as an environment variable to the services using it.
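For example, something along these lines, where the variable name and address are placeholders rather than part of your setup:
services:
  frontend-client:
    image: xxx:10
    environment:
      POSTGRES_HOST: 10.0.0.5   # IP of the server running postgresql (placeholder)
      POSTGRES_PORT: "5432"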
If you are using docker-compose instead of docker swarm, you can set network_mode to host:
version: '3.8'
services:
  frontend-client:
    image: xxx:10
    container_name: frontend
    restart: on-failure
    deploy:
      mode: replicated
      replicas: 3
    network_mode: "host"
    ports:
      - "xxxx:3000"
Notice I've removed the networks part of your compose snippet and replaced it with network_mode.

Disable external node service accessibility in docker swarm

I have a docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has 2 services running inside it. My problem is how to force the ubuntu service in node1 to connect only to the mysql service in node1, and not use the round-robin method to select a mysql service.
So when I connect to mysql from ubuntu in node1 with mysql -hmysql -uroot -p, it should select only the mysql in node1.
Here is the docker-compose file which describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it selects the mysql service on both nodes with a round-robin algorithm.
What I'm trying to achieve is to force each service to be visible only to the services on the same node.
I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket onto that volume. Docker will create a volume on each node, and thus you'll have your communication closed within the node.
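A rough sketch of that idea, assuming the default socket location of the mysql image and a volume name of my choosing:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - mysql-socket:/var/run/mysqld   # default socket directory in the image
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    volumes:
      - mysql-socket:/var/run/mysqld   # same volume, so the socket is visible here
    deploy:
      mode: global
volumes:
  mysql-socket:
Inside the ubuntu container the client would then connect with mysql --socket=/var/run/mysqld/mysqld.sock -uroot -p, and since the named volume is local to each node, it always reaches the co-located MySQL instance.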
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. Now, not only is it hard to do, it is also an anti-pattern and a security threat, so I don't recommend you implement this idea.
At last there is also Kubernetes, where containers inside a pod can communicate with each other via localhost but I think you won't go that far, will you?
You should have a look at mode=host.
You can bypass the routing mesh, so that when you access the bound port on a given node, you are always accessing the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
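Applied to the mysql service from the question, that might look like the following sketch (the published port is an assumption); each node then exposes only its own mysql instance on that node's IP:
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      mode: global
    ports:
      - target: 3306
        published: 3306
        protocol: tcp
        mode: host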
Unless I'm missing something, I would say you should not use global deploy and instead declare 2 ubuntu services and 2 mysql services in the compose file, or deploy 2 separate stacks, and in both cases use constraints to pin containers to a specific node.
An example for the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]
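With this layout, the container pinned to node1 connects explicitly to its co-located database by service name, for example:
mysql -h mysql1 -uroot -p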

Docker Swarm app deployed with stack is not accessible externally

I am new to Docker swarm and therefore unsure what to do.
I am trying to create a drupal site and a DB using stack in Swarm.
My configuration consists of 3 VMs (using Virtual Box) which are connected to the bridged network adapter and are on the same subnet as the host (10.0.0.X).
Once I had deployed the app there were no errors; however, I was not able to access the site from the host.
I have also verified connectivity between all nodes on the following ports
7946/tcp, 7946/udp, and 4789/udp
What am I missing?
This is the compose file I am using:
version: '3.1'
services:
  drupal:
    container_name: drupal
    image: drupal:8.2
    ports:
      - "8080:80"
    networks:
      - drupal_net
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
  postgres:
    networks:
      - drupal_net
    container_name: postgres
    image: postgres:9.6
    secrets:
      - psql-password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-password
    volumes:
      - drupal-data:/var/lib/postgresql/data
networks:
  drupal_net:
    driver: overlay
volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:
secrets:
  psql-password:
    external:
      name: psql-pw
Notes:
I have tried to add the host machine to the Swarm and was able to access the app only via localhost. If I entered one of the nodes' IP addresses, I still received no response.
From another device on the same network, the app was still inaccessible (I was able to ping all nodes, including the host).
I have also tried to create a single-node Swarm, and still there was no access to the app.
I have tested all of the scenarios mentioned with a simple nginx app as well.

How to connect my docker container (frontend) to a containerized database running on a different VM

Unable to connect to containers running on separate docker hosts
I've got 2 docker Tomcat containers running on 2 different Ubuntu VMs. System-A has a webservice running and System-B has a db. I haven't been able to figure out how to connect the application running on System-A to the db running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to the database. I'm using docker-compose to set up the network (which works fine when both containers are running on the same VM). I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker wasn't involved: configure the Tomcat instance with the DNS name or IP address of the other server. You need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)

Connect two instances of docker-compose [duplicate]

This question already has answers here:
Communication between multiple docker-compose projects
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running in the same docker machine in development. In production, that will probably not be the case.
How can I configure my application's docker-compose.yml to link to the ELK stack?
Update Jun 2016
The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution.
https://stackoverflow.com/a/34476794/1556338
Old answer
Create a network:
$ docker network create --driver bridge my-net
Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:
pg:
  image: postgres:9.4.4
  container_name: pg
  net: ${NETWORK}
  ports:
    - "5432"
myapp:
  image: quay.io/myco/myapp
  container_name: myapp
  environment:
    DATABASE_URL: "http://pg:5432"
  net: ${NETWORK}
  ports:
    - "3000:3000"
Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to the /etc/hosts of the myapp container.
Call docker-compose, passing it the network you created:
$ NETWORK=my-net docker-compose -f docker-compose.yml -f other-compose.yml up -d
I've created a bridge network above which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though. You pass the network name to the docker-compose up command.
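For the multi-node case, that would be something like the following sketch (the network name is arbitrary); on current Docker, --attachable lets standalone docker-compose containers join the swarm-scoped overlay:
$ docker network create --driver overlay --attachable my-net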
You could also create a network with docker outside your docker-compose:
docker network create my-shared-network
And in your docker-compose.yml:
version: '2'
services:
  pg:
    image: postgres:9.4.4
    container_name: pg
    expose:
      - "5432"
networks:
  default:
    external:
      name: my-shared-network
And in your second docker-compose.yml:
version: '2'
services:
  myapp:
    image: quay.io/myco/myapp
    container_name: myapp
    environment:
      DATABASE_URL: "http://pg:5432"
    expose:
      - "3000"
networks:
  default:
    external:
      name: my-shared-network
Both instances will be able to see each other without opening ports on the host; you just need to expose the ports, and they will see each other through the network "my-shared-network".
If you set a predictable project name for the first composition, you can use external_links to reference external containers by name from a different compose file.
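A sketch of that approach, assuming the first project was started with a project name such as myproject, so its database container is named myproject_pg_1 under the compose v1 naming scheme:
myapp:
  image: quay.io/myco/myapp
  external_links:
    - myproject_pg_1:pg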
In the next docker-compose release (1.6) you will be able to use user defined networks, and have both compositions join the same network.
Take a look at multi-host docker networking:
Networking is a feature of Docker Engine that allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. The networked containers can even span multiple hosts, so you don't have to worry about what host your container lands on. They seamlessly communicate with each other wherever they are – thus enabling true distributed applications.
I didn't find any complete answer, so I decided to explain it in a complete and simple way.
To connect two docker-compose projects you need a network, and both docker-compose projects must be put on that network.
You could create the network with docker network create name-of-network, or you could simply put a network declaration in the networks option of the docker-compose file; when you run docker-compose (docker-compose up) the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose files and can be different in each one. test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true        # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
then check the network with the following commands:
# if it returns 0 (or you see nothing as a result), the network is established
nc -z psql 5432   # psql is the container name, 5432 the postgres port
or
ping psql
