Docker Swarm containers connection issues

I am trying to use Docker Swarm to create a simple Node.js service that sits behind HAProxy and connects to MySQL, so I created the docker-compose file shown below.
I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
The backend tries to connect to the database too soon even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But I still have the issues I mentioned above.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?

The backend service can't connect to the database using localhost or 127.0.0.1, so I managed to connect to the database using the private IP (10.0.1.4) of the database container.
Don't use IP addresses for the connection; use just the DNS name.
So you must change the connection to DATABASE_HOST=db, because db is the service name you've defined.
localhost is wrong, because the database is running in a different container than your test service.
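In other words, the environment block of the test service from the compose file above becomes:

    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db      # the service name, resolved by Docker's internal DNS
      - NODE_ENV=development

You will also want to drop the extra_hosts: - db:10.0.1.4 entry, since it pins db to a hard-coded IP.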
The backend tries to connect to the database too soon even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is your haproxy configuration that it must request for http://test:3000 when something requests haproxy on /api/test?

For DATABASE_HOST=localhost: the word localhost means "my local container". You need to use the name of the service where the db is hosted. localhost is a special DNS name that always points to the application host; when using containers, that is the container itself. In cloud development, forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and you will not be able to use load balancing), and simply use service names.
As for readiness: Docker has no way of knowing whether the application you started in a container is ready. You need to make the service tolerant of the database being unavailable and code/script some polling/fault-tolerance mechanism, for example as sketched below.
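One simple way to script that polling (a sketch only: it assumes nc is available in the test-back image and that node server.js is its real entrypoint) is to wrap the container's startup command in a wait loop:

  test:
    image: test-back:01
    # keep retrying until the db service accepts TCP connections, then start the app
    command: ["sh", "-c", "until nc -z db 3306; do echo waiting for db; sleep 2; done; node server.js"]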

Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and the db is set up with the proper password, database, etc. (you usually set these things as environment vars in compose based on their Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of all distributed computing: anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm service task each time.
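One caveat with that crash-and-restart approach: the max_attempts: 3 in your restart_policy stops rescheduling the task after three failed starts. A sketch of a more forgiving policy:

    deploy:
      restart_policy:
        condition: on-failure
        delay: 10s          # give the db some time between attempts
        # no max_attempts, so Swarm keeps re-creating the task until it stays up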
If you could attempt to make this with public Docker Hub images, I can try to test for you.
Note that in Swarm it's likely easier to use Traefik for the proxy (see the Traefik on Swarm Mode Guide), which will auto-update and route incoming requests to the correct container based on the hostname you give in the labels. But you should first test just the app and db, and only once you know that works try adding in the proxy layer.
Also, in Swarm all your networks should be overlay, and you don't need to specify a driver, as that is the default in stacks.
Below is a sample using traefik with your above settings. I didn't give the test service a specific traefik hostname so it should accept all traffic coming in on 80 and forward to 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:

Related

Cannot connect to Mysql using Docker

I built a website using Strapi and Gatsby. Everything works well when I connect to a remote database, but I'm trying to create a DB inside a container and so far no luck.
Essentially, what I did was create the following docker-compose file:
version: '3'
services:
  backend:
    container_name: myapp_backend
    build: ./backend/
    ports:
      - '3002:3002'
    volumes:
      - ./backend:/usr/src/myapp/backend
      - /usr/src/myapp/backend/node_modules
    environment:
      - APP_NAME=myapp_backend
      - DATABASE_CLIENT=mysql
      - DATABASE_HOST=db
      - DATABASE_PORT=3307
      - DATABASE_NAME=myapp_db
      - DATABASE_USERNAME=johnny
      - DATABASE_PASSWORD=stecchino
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=myapp_db
      - HOST=localhost
    depends_on:
      - db
    restart: always
  db:
    container_name: myapp_mysql
    image: mysql:5.7
    volumes:
      - ./db.sql:/docker-entrypoint-initdb.d/db.sql
    restart: always
    ports:
      - 3307:3307
    environment:
      MYSQL_ROOT_PASSWORD: 5!JF6!FgAkvt
      MYSQL_DATABASE: myapp_db
      MYSQL_USER: johnny
      MYSQL_PASSWORD: stecchino
    command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: 'myapp_phpmyadmin'
    links:
      - db
    environment:
      PMA_HOST: db
      PMA_PORT: 3307
    ports:
      - '8081:80'
    volumes:
      - /sessions
    depends_on:
      - db
  frontend:
    container_name: myapp_frontend
    build: ./frontend/
    ports:
      - '3001:3001'
    depends_on:
      - backend
    volumes:
      - ./frontend:/usr/src/myapp/frontend
The backend service contains the Strapi application, and the db service contains the MySQL instance, which runs on port 3307 because 3306 is already in use.
I have also installed phpMyAdmin, and last but not least the Gatsby site. When I run docker-compose up --build and try to access phpMyAdmin using:
http://localhost:8081/index.php
with the following credentials:
user: johnny
pwd: stecchino
I get:
MySQL mysqli::real_connect():(HY000/2002): Connection refused
Now, what I did to fix that situation is pass the port 3306 instead of 3307 to the backend and phpmyadmin services. And magically, everything works. But why? I have mapped container and host to 3307...
There are two things happening here.
MySQL is running on port 3306.
This is because you never told the MySQL container to run on port 3307; the default configuration listens on 3306.
phpMyAdmin can connect to MySQL at port 3306.
Of course it can. When you define multiple services within the same docker-compose file, they start on the same network. This means they can see and connect to each other's internal ports without the need for an external port binding like 3306:3306.
I would suggest keeping port bindings only for services that you want to access from outside the Docker environment (like the UI), and for internal components just expose the port, like this:
expose:
  - 3306
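For example, a sketch of just the relevant parts of your file, keeping MySQL on its default 3306 inside the network and publishing it to the host only if you really need to:

  db:
    image: mysql:5.7
    expose:
      - 3306               # reachable as db:3306 by other services on the same network
    ports:
      - "3307:3306"        # optional: host port 3307 -> container port 3306, for access from outside Docker
  backend:
    environment:
      - DATABASE_HOST=db
      - DATABASE_PORT=3306   # the container-internal port, not the host-side mapping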
Both answers are useful; I am particularly fond of Manish's answer.
I wanted to add some additional wording:
There are internal Docker networks which nothing from the outside can gain access to. From inside any given service (or container), you can reach every other service (or container) via:
<service-name>:<port>/path/of/resources
<container-name>:<port>/path/of/resources
In order to access resources inside the docker network from outside of docker, whether that is from your host environment, or farther upstream on the internet, the docker daemon needs to bind to host ports, and then forward information received on those ports to a docker service (and ultimately a docker container).
In your docker-compose.yml, when you do the 3307:3307 you are telling the Docker daemon to listen on port 3307 and forward to your db service internally on its port 3307.
However, from what we can all see, mysql is still internally (that is, inside the container) listening for traffic on port 3306. Any containers or services on the same docker networks as your db service (mysql running container(s)) would be able to access mysql via something like:
<driver>:mysql://db:3306/<dbname>
If you wanted all host traffic and docker network traffic to access mysql on port 3307, you would also need to configure mysql to listen on port 3307 instead of 3306. That tidbit of information does not appear to be in your question at the time of writing.
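If you did want that, a sketch (assuming the mysql:5.7 image from your file) would be to add --port=3307 to the mysqld command you already pass and publish 3307:3307:

  db:
    image: mysql:5.7
    command: mysqld --port=3307 --character-set-server=utf8 --collation-server=utf8_general_ci
    ports:
      - "3307:3307"   # host:container, and MySQL itself now listens on 3307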
I hope the additional information helps! It's a topic I chat often about when talking docker with folks.
Because 3306 is the port exposed by the official Dockerfile.
What you can do is map the port MySQL is running on to another port on your host: 3307:3306 for instance (always host:container).
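That is, for your db service (a minimal sketch):

  db:
    ports:
      - "3307:3306"   # host:container — port 3307 on the host, still db:3306 inside the network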

Docker Swarm app deployed with stack is not accessible externally

I am new to Docker swarm and therefore unsure what to do.
I am trying to create a drupal site and a DB using stack in Swarm.
My configuration consists of 3 VMs (using VirtualBox) which are connected to the bridged network adapter and are on the same subnet as the host (10.0.0.X).
Once I had deployed the app there were no errors; however, I was not able to access the site from the host.
I have also verified connectivity between all nodes on the following ports:
7946/tcp, 7946/udp, and 4789/udp
What am I missing?
This is the compose file I am using:
version: '3.1'
services:
  drupal:
    container_name: drupal
    image: drupal:8.2
    ports:
      - "8080:80"
    networks:
      - drupal_net
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
  postgres:
    networks:
      - drupal_net
    container_name: postgres
    image: postgres:9.6
    secrets:
      - psql-password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/psql-password
    volumes:
      - drupal-data:/var/lib/postgresql/data
networks:
  drupal_net:
    driver: overlay
volumes:
  drupal-data:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:
secrets:
  psql-password:
    external:
      name: psql-pw
Notes:
I have tried adding the host machine to the Swarm and was able to access the app only via localhost. If I entered one of the nodes' IP addresses, I still received no response.
From another device on the same network, the app was still inaccessible (I was able to ping all nodes, including the host).
I have tried creating a single-node Swarm and there was still no access to the app.
I have tested all of the scenarios mentioned with a simple nginx app as well.

Multiple docker networks on a single container

I have a docker stack configuration with an overlay network called traefik. It contains my traefik reverse-proxy container and then several containers that serve various subdomains. I'm adding a new one that'll need to access a database server that I've created in another container, so I added something like this:
networks:
  traefik:
    driver: overlay
  database:
    driver: overlay
services:
  traefik:
    image: traefik
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
      - 8080:8080
    volumes:
      # ...
  # ...
  database:
    image: postgres:9.6-alpine # match what's being used by heroku
    networks:
      - database
  staging:
    image: staging
    networks:
      - traefik
      - database
    deploy:
      labels:
        traefik.enable: "true"
        traefik.frontend.rule: Host:lotto-ticket.example.com
        traefik.docker.network: traefik
        traefik.port: 3000
When I did this, the reverse proxy started returning Gateway Timeout codes, indicating that the staging container was no longer available to the traefik network. If I remove the database network from the staging container, the subdomain serves up as expected (although it obviously fails to access the database), and when I add it back in, the reverse-proxy starts getting timeouts from it.
Is there some kind of conflict between the two networks? Do I need to configure IP routing in order to use multiple docker networks on a single container?
EDIT: added more detail to the config excerpt
You need to tell traefik on which network to connect to your service using the label traefik.docker.network. In your case, that label would be set to ${stack_name}_traefik where ${stack_name} is the name of your stack.
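For example, if the stack was deployed as docker stack deploy -c docker-compose.yml mystack (a hypothetical stack name), the label on the staging service would look like:

  staging:
    deploy:
      labels:
        traefik.docker.network: mystack_traefik   # <stack name>_<network name>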

Connecting to 'localhost' or '127.0.0.1' with a Docker/docker-compose setup

This seems to be a common question, albeit with different contexts, but I'm having a problem connecting to my localhost DB when using Docker.
If I inspect the mysql container using docker inspect, find the IP address and use this as the DB host in the CMS, it runs fine... the only issue is that the mysql container's IP address changes (upon each docker-compose up and if I change wifi networks), so ideally I'd like to use 'localhost' or '127.0.0.1', but for some reason this results in a SQLSTATE[HY000] [2002] Connection refused error.
How can I use 'localhost' or '127.0.0.1' as the DB hostname in CMS applications so I don't have to keep changing it as the container IP address changes?
This is my docker-compose.yml file:
version: "3"
services:
  webserver:
    build:
      context: ./bin/webserver
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    links:
      - mysql
    volumes:
      - ${DOCUMENT_ROOT-./www}:/var/www/html
      - ${PHP_INI-./config/php/php.ini}:/usr/local/etc/php/php.ini
      - ${VHOSTS_DIR-./config/vhosts}:/etc/apache2/sites-enabled
      - ${LOG_DIR-./logs/apache2}:/var/log/apache2
    networks:
      mynet:
        aliases:
          - john.dev
  mysql:
    image: 'mysql:5.7'
    restart: 'always'
    ports:
      - "3306:3306"
    volumes:
      - ${MYSQL_DATA_DIR-./data/mysql}:/var/lib/mysql
      - ${MYSQL_LOG_DIR-./logs/mysql}:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - mynet
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    links:
      - mysql
    environment:
      PMA_HOST: mysql
      PMA_PORT: 3306
    ports:
      - '8080:80'
    volumes:
      - /sessions
    networks:
      - mynet
networks:
  mynet:
Try using mysql instead of localhost.
You are defining a link between the webserver container and the mysql container, so the webserver container is able to resolve the mysql IP.
According to Docker documentation:
Docker Cloud gives your containers two ways to find other services:
Using service and container names directly as hostnames
Using service links, which are based on Docker Compose links
Service and Container Hostnames update automatically when a service
scales up or down or redeploys. As a user, you can configure service
names, and Docker Cloud uses these names to find the IP of the
services and containers for you. You can use hostnames in your code to
provide abstraction that allows you to easily swap service containers
or components.
Service links create environment variables which allow containers to
communicate with each other within a stack, or with other services
outside of a stack. You can specify service links explicitly when you
create a new service or edit an existing one, or specify them in the
stackfile for a service stack.
From Docker compose documentation:
Containers for the linked service are reachable at a hostname identical to the alias, or the service name if no alias was specified.
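So whichever database-host setting your CMS reads should point at the mysql service name rather than localhost. As a sketch (DB_HOST and DB_PORT are hypothetical variable names; use whatever your CMS actually expects):

  webserver:
    environment:
      DB_HOST: mysql   # the service name from this compose file
      DB_PORT: 3306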

traefik hostname works for web apps but not for mongodb

I'm running a mongo instance with docker-compose and traefik.
myapp-mongo:
  build: ../images/myapp-mongo
  restart: always
  ports:
    - "27017:27017"
  labels:
    - "traefik.ports=27017,27018"
    - "traefik.backend=myapp-mongo"
    - "traefik.frontend.rule=Host:myapp-mongo.docker.localhost"
  networks:
    - development
  environment:
    - MONGO_USER=${MONGO_USER}
    - MONGO_PASSWD=${MONGO_PASSWD}
    - MONGO_AUTHDB=${MONGO_AUTHDB}
Mongo is running fine and I can connect using 127.0.0.1 from my Mac.
The problem is that I can't connect using hostname myapp-mongo.docker.localhost. It only works using IP 127.0.0.1.
Trying to ping the IP 127.0.0.1 responds ok, but trying to ping the hostname doesn't work.
I've already added 127.0.0.1 proxy.docker.localhost into /etc/hosts to get traefik working.
All other web apps have working hostnames, e.g. myapp.docker.localhost. This problem is only happening with this mongodb container.
Probably because Træfik is an HTTP proxy and so only supports HTTP/HTTPS connections.
I believe #bpatel is right (see the comment I left on his answer with a link to the GitHub conversation): at the time of writing, Traefik only supports HTTP/HTTPS.
Solution using native docker networks
However, you can get around this issue! Since you are using Docker, you can work around it by using the container name in your code (assuming mongo and your mongo-accessing code are both running in containers on a shared Docker network; this will be the case if the containers are spun up with docker-compose). Run the following to see if your containers are linked up correctly:
run docker ps to get your container names running (under the NAMES column)
run docker network ls to see your network names
run docker network inspect <target_network_name> to verify your containers from step 1 are on the same network.
I run docker-compose from three separate compose files, so you should be able to cover most cases from the following (apologies for any syntax errors, the following are stripped down code examples):
Entire docker-compose file that starts up traefik (under directory name 'proxy'):
version: '2'
services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
networks:
  webgateway:
    driver: bridge
snippet from my docker-compose file that spins up mongo
version: '2'
services:
  database:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - web
networks:
  web:
    external:
      name: proxy_webgateway
snippet from docker-compose that has mongo accessing code
version: '2'
services:
  topicOntologyBuilder:
    image: topic-ontology-builder
    labels:
      - "traefik.backend=topicOntologyBuilder"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:topic-ontology.docker.localhost"
    networks:
      - web
    volumes:
      - ./:/home
networks:
  web:
    external:
      name: proxy_webgateway
Connection in Code
I'm not certain what language you're using, but this is what the JS code looked like for me to connect to Mongo (inside that topicOntologyBuilder container, while using Traefik as the proxy; again, this works because we're making the most of Docker networks):
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://<MONGO_CONTAINER_NAME>/<DB_NAME>', function(err, db) {
  // insert code here to interact with mongo
});
Why this works
This works because Docker does some clever DNS resolution within the containers: each container can look up the IPs of other containers on the same network by their container names.
Extra intel
If your containers are on separate computers/VMs, you'll probably want to play around with a service discovery tool (Consul plays well with Traefik) or do something fancy with a Docker overlay network, which is designed for containers in a cluster.
If using raw Docker networks, you can assign container aliases (this doesn't work with Traefik though, or at least it didn't a couple of months back).
