Docker Compose container communication

I have 7 Docker containers, namely:
es_search
postgres_db
fusionauth
mysql_db
auth
backend
ui
The communication between the containers should be as follows:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
Existing docker-compose
version: '3.1'
services:
  postgres_db:
    container_name: postgres_db
    image: postgres:9.6
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - ${POSTGRES_PORT}:5432
    networks:
      - postgres_db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data_test
  es_search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    container_name: es_search
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    ports:
      - ${ES1_PORT}:9200
      - ${ES_PORT}:9300
    networks:
      - es_search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
  fusionauth:
    image: fusionauth/fusionauth-app:latest
    container_name: fusionauth
    depends_on:
      - postgres_db
      - es_search
    environment:
      DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
      DATABASE_ROOT_USER: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USER: ${DATABASE_USER}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
      FUSIONAUTH_URL: http://fusionauth:9010
    networks:
      - postgres_db
      - es_search
    restart: unless-stopped
    ports:
      - ${FUSIONAUTH_PORT}:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config
  db:
    container_name: db
    image: mysql:5.7
    volumes:
      - /etc/nudjur/mysql_data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
    ports:
      - ${MYSQL_PORT}:3306
    command: --default-authentication-plugin=mysql_native_password
    restart: on-failure
  backend:
    container_name: backend
    links:
      - db:${MYSQL_HOST}
    depends_on:
      - db
    image: ${BACKEND_IMAGE}
    volumes:
      - ${ENV_FILE}:/backend/.env
    ports:
      - ${BACKEND_PORT}:${BACKEND_PORT}
    command: >
      bash -c "set -a && source .env && set +a"
    restart: unless-stopped
  UI:
    container_name: UI
    image: ${UI_IMAGE}
    volumes:
      - ${ENV_FILE}:/nudjur/.env
    ports:
      - ${UI_PORT}:${UI_PORT}
    command: >
      bash -c "PORT=${UI_PORT} npm start"
    restart: unless-stopped
  auth:
    container_name: auth
    network_mode: host
    image: ${AUTH_IMAGE}
    volumes:
      - ${ENV_FILE}:/auth/.env
    ports:
      - ${AUTH_PORT}:${AUTH_PORT}
    command: >
      bash -c "set -a && source .env && set +a && python3 ./auth_bridge.py --log-level DEBUG run -p ${AUTH_PORT}"
    restart: unless-stopped
networks:
  postgres_db:
    driver: bridge
  es_search:
    driver: bridge
volumes:
  db_data:
  es_data:
  fa_config:
I am confused about how to establish communication between them.
Can someone help me with this?

I understand you want to restrict communications so that containers can only communicate with specific other services:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
You can use networks, as you already partially do in your example, to enable communication such that:
services on the same network can reach each other using the service name or alias - i.e. es_search can be reached by other services on the same network via http://es_search:9200
services on different networks are isolated and cannot communicate with each other
You can then define your networks such as:
services:
  postgres_db:
    networks:
      - postgres_db
  es_search:
    networks:
      - es_search
  # fusionauth should be able to contact es_search, postgres_db
  fusionauth:
    networks:
      - fusionauth
      - postgres_db
      - es_search
  db:
    networks:
      - mysql_db
  # backend should be able to contact auth, mysql_db
  backend:
    networks:
      - backend
      - auth
      - mysql_db
  # ui should be able to contact backend, auth
  UI:
    networks:
      - backend
      - auth
  # auth should be able to contact fusionauth, backend
  auth:
    networks:
      - auth
      - fusionauth
      - backend
networks:
  fusionauth:
  backend:
  auth:
  postgres_db:
  es_search:
  mysql_db:
Here all services (except ui) have their own network, and another service must be on a given service's network to communicate with it.
Note: I did not use links as it is a legacy option and may be removed in future releases, as stated by the docs.

Delete every single last networks: and container_name: option from the docker-compose.yml file. network_mode: host as you have on the auth container is incompatible with Docker network communication, and seems to usually get presented as a workaround for port-publishing issues; delete that too. You probably want the name ui: to be in lower case as well.
When you do this, Docker Compose will create a single network named default and attach all of the containers it creates to that network. They will all be reachable by their service names. Networking in Compose in the Docker documentation describes this in more detail.
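As a minimal sketch of that default-network behavior (the service and image names here are hypothetical, not from the question):

```yaml
# No networks: blocks at all: Compose creates a single network named "default"
# and attaches both services to it, so "app" can reach "db" by service name.
version: "3.8"
services:
  db:
    image: mysql:5.7
  app:
    image: example/backend:latest   # hypothetical image
    environment:
      DATABASE_HOST: db             # the Compose service name is the hostname
      DATABASE_PORT: "3306"
```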
Your fusionauth container has the right basic setup. Trimming out some options:
fusionauth:
  image: fusionauth/fusionauth-app:latest
  depends_on:
    - postgres_db
    - es_search
  environment:
    # vvv These host names are other Compose service names
    # vvv and default Compose networking makes them reachable
    DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
    DATABASE_ET: CETERA
    FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
    FUSIONAUTH_URL: http://fusionauth:9010
  restart: unless-stopped
  ports:
    - ${FUSIONAUTH_PORT}:9011
  volumes:
    - fa_config:/usr/local/fusionauth/config
  # no networks: or container_name:
If the ui container presents something like a React or Angular front-end, remember that the application runs in a browser, not in Docker, and so it will have to reach back to the physical system's DNS name or IP address and published ports:. It's common to introduce an nginx reverse proxy into this setup to serve both the UI code and the REST interface it needs to communicate with.
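A sketch of that reverse-proxy pattern (the upstream ports here are assumptions, not from the question); the browser talks only to nginx, which forwards to the other containers by their Compose service names:

```
server {
    listen 80;

    # REST calls are forwarded to the backend service over the Compose network
    location /api/ {
        proxy_pass http://backend:4000/;   # assumed backend port
    }

    # everything else is served by the ui container
    location / {
        proxy_pass http://ui:3000/;        # assumed ui port
    }
}
```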
In principle you can set this up with multiple networks for more restrictive communications as the other answers have done. That could prevent ui from contacting postgres_db. I'd be a little surprised to see an environment where that level of control is required, and also where Compose is an appropriate deployment solution.
Also unnecessary, and frequently appearing in other questions like this, are hostname: (only sets containers' own notions of their own host names), links: (only relevant for pre-network Docker), and expose: (similarly).

from your docker-compose.yml.
postgres_db:
  ..
  networks:
    - postgres_db_net
fusionauth:
  ..
  environment:
    DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
  networks:
    - postgres_db_net
The services postgres_db and fusionauth are attached to the network postgres_db_net, which enables them to communicate with each other.
Communication happens by using the service name, which also works as a hostname inside the container: the service fusionauth knows the database by its name postgres_db in the DATABASE_URL.

Related

Traefik 2 network between 2 containers results in Gateway Timeout errors

I'm trying to set up 2 Docker containers with docker-compose: one is a Traefik proxy and the other is a Vikunja kanban board container.
They both have their own docker-compose file. I can start the containers and the Traefik dashboard doesn't show any issues but when I open the URL in a browser I only get a Gateway Timeout error.
I have been looking at similar questions on here and different platforms and in nearly all other cases the issue was that they were placed on 2 different networks. However, I added a networks directive to the Traefik docker-compose.yml and still have this problem, unless I'm using them wrong.
The docker-compose file for the Vikunja container
(adapted from https://vikunja.io/docs/full-docker-example/)
version: '3'
services:
  api:
    image: vikunja/api
    environment:
      VIKUNJA_DATABASE_HOST: db
      VIKUNJA_DATABASE_PASSWORD: REDACTED
      VIKUNJA_DATABASE_TYPE: mysql
      VIKUNJA_DATABASE_USER: vikunja
      VIKUNJA_DATABASE_DATABASE: vikunja
      VIKUNJA_SERVICE_JWTSECRET: REDACTED
      VIKUNJA_SERVICE_FRONTENDURL: REDACTED
    volumes:
      - ./files:/app/vikunja/files
    networks:
      - web
      - default
    depends_on:
      - db
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-api.rule=Host(`subdomain.domain.de`) && PathPrefix(`/api/v1`, `/dav/`, `/.well-known/`)"
      - "traefik.http.routers.vikunja-api.entrypoints=websecure"
      - "traefik.http.routers.vikunja-api.tls.certResolver=myresolver"
  frontend:
    image: vikunja/frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-frontend.rule=Host(`subdomain.domain.de`)"
      - "traefik.http.routers.vikunja-frontend.entrypoints=websecure"
      - "traefik.http.routers.vikunja-frontend.tls.certResolver=myresolver"
    networks:
      - web
      - default
    restart: unless-stopped
  db:
    image: mariadb:10
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_ROOT_PASSWORD: REDACTED
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: REDACTED
      MYSQL_DATABASE: vikunja
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped
    command: --max-connections=1000
    networks:
      - web
networks:
  web:
    external: true
The network directives for the api and frontend services in the Vikunja docker-compose.yml were present in the template (I added one for the db service for testing but it didn't have any effect).
networks:
  - web
After getting a docker error about the network not being found I created it via docker network create web
The docker-compose file for the Traefik container
version: '3'
services:
  traefik:
    image: traefik:v2.8
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt:/letsencrypt
      - ./traefik.http.yml:/etc/traefik/traefik.yml
    networks:
      - web
networks:
  web:
    external: true
I've tried adding the Traefik service to the Vikunja docker-compose.yml in one file but that didn't have any effect either.
I'm thankful for any pointers.
For debugging you could try to configure all containers to use the host network, to ensure they are really on the same network.
I had a similar issue trying to run two different Docker stacks and getting a
"Gateway Timeout". My issue was solved after changing the port mapping for Traefik in the second stack and accessing the site with :84 at the end (http://sitename:84):
traefik:
  image: traefik:v2.0
  container_name: "${PROJECT_NAME}_traefik"
  command: --api.insecure=true --providers.docker
  ports:
    - '84:80'
    - '8084:8080'
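Another cause of Gateway Timeouts worth checking in a setup like the one above: when a container is attached to several networks (as the api and frontend services are, with both web and default), Traefik may pick an IP on a network it is not itself attached to. A sketch of pinning the network Traefik should use, per container:

```yaml
labels:
  - "traefik.enable=true"
  # Tell Traefik which of the container's networks to use when forwarding;
  # "web" is the external network the Traefik container is also attached to.
  - "traefik.docker.network=web"
```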

Docker network configuration endpoints

I want to know how to configure correctly the backend endpoint.
I have a Docker setup that runs different containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed I can successfully do it on the local machine where Docker is running on.
However, the frontend is not able to resolve the endpoint "host.docker.internal" if I try to make a request from another machine. Please note that I'm able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" it's the port where the frontend is.
Obviously I can successfully make the requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served on Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here it is my docker compose:
version: "3.8"
services:
  db: #mysqldb
    image: mysql:5.7
    container_name: ${DB_SERVICE_NAME}
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - ./docker-compose/mysql:/docker-entrypoint-initdb.d
    networks:
      - backend
  mrmfrontend:
    build:
      context: ./mrmfrontend
      args:
        - REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
        - REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
        - REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
        - REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
        - REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
        - REACT_APP_NAME=$REACT_APP_NAME
        - REACT_APP_OWNER=""
    ports:
      - $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
    networks:
      - frontend
    volumes:
      - ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
  app:
    build:
      args:
        user: admin
        uid: 1000
      context: ./MRMBackend
      dockerfile: Dockerfile
    image: backend
    container_name: backend-app
    restart: unless-stopped
    working_dir: /var/www/
    volumes:
      - ./MRMBackend:/var/www
    networks:
      - backend
  nginx:
    image: nginx:alpine
    container_name: backend-nginx
    restart: unless-stopped
    ports:
      - 8000:80
    volumes:
      - ./MRMBackend:/var/www
      - ./docker-compose/nginx/backend:/etc/nginx/conf.d/
    networks:
      - backend
      - frontend
volumes:
  db:
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
The endpoint is configured in this way in the .env:
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
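Since the React app actually runs in the visitor's browser rather than inside Docker, browsers on other machines cannot resolve host.docker.internal; using the Docker host's own address from the question above would look like:

```
# .env sketch: point the browser-side code at the Docker host's LAN address
REACT_APP_BACKEND_ENDPOINT="http://192.168.1.11:8000"
```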

Can not send post request to nuxeo server over docker container

I can send a POST request to the Nuxeo server using the http://localhost:8080 base address from my local machine. When I add Docker support to my app, my app cannot send a POST request to the Nuxeo server using the http://nuxeo_container_name:80 base address. It returns a Bad Request. How can I solve it? The Nuxeo server and the app are in the same Docker network.
This is my docker-compose file for the Nuxeo server. I use nuxeo_app_server in my app as the Nuxeo container name.
version: "3.5"
networks:
  nuxnet:
    name: network
services:
  nginx:
    container_name: nuxeo_app_server
    build: nginx
    ports:
      # For localhost use, the exposed nginx port
      # must match the localhost:port below in NUXEO_URL
      - "8080:80"
      #- "443:443"
    cap_add:
      - NET_BIND_SERVICE
    links:
      - nuxeo1
      # - nuxeo2
    environment:
      USE_STAGING: 1
      # default is 4096, but gcloud requires 2048
      KEYSIZE: 2048
      DOMAIN_LIST: /etc/nginx/conf.d/domains.txt
    devices:
      - "/dev/urandom:/dev/random"
    sysctls:
      - net.core.somaxconn=511
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - certs:/etc/ssl/acme
    networks:
      - nuxnet
    restart: always
  nuxeo1:
    image: nuxeo
    hostname: nuxeo1
    links:
      - redis
      - es
      - db
    env_file:
      - ./nuxeo/setup.env
    environment:
      # Each nuxeo container must have a unique cluster id
      NUXEO_CLUSTER_ID: 1
      # URL that a user would use to access nuxeo UI or API
      # For localhost urls, the port must match the exposed nginx port above
      NUXEO_URL: http://localhost:8080/nuxeo
      # JAVA memory tuning -Xms, -Xmx
      JVM_MS: 1024m
      JVM_MX: 2048m
    devices:
      - "/dev/urandom:/dev/random"
    volumes:
      - ./nuxeo/init:/docker-entrypoint-initnuxeo.d:ro
      - app-data:/var/lib/nuxeo
      - app-logs:/var/log/nuxeo
    networks:
      - nuxnet
    restart: always
  redis:
    # note: based on alpine:3.6
    # see https://hub.docker.com/_/redis/
    image: redis:3.2-alpine
    volumes:
      - redis-data:/data
    networks:
      - nuxnet
    restart: always
  es:
    image: elasticsearch:2.4-alpine
    volumes:
      - es-data:/usr/share/elasticsearch/data
      - es-plugins:/usr/share/elasticsearch/plugins
      - es-config:/usr/share/elasticsearch/config
      - ./elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      # settings below add -Xms400m -Xmx1g
      ES_MIN_MEM: 500m
      EX_MAX_MEM: 1g
    security_opt:
      - seccomp:unconfined
    networks:
      - nuxnet
    restart: always
  db:
    image: postgres:9.6-alpine
    # note mem tuning suggestions in the following two links
    # https://doc.nuxeo.com/nxdoc/postgresql/
    # https://doc.nuxeo.com/nxdoc/postgresql/#adapt-your-configuration-to-your-hardware
    environment:
      POSTGRES_USER: nuxeo
      POSTGRES_PASSWORD: nuxeo
      POSTGRES_DB: nuxeo
      POSTGRES_INITDB_ARGS: "-E UTF8"
      PGDATA: /var/lib/postgresql/data
    volumes:
      - db-store:/var/lib/postgresql/data
      - ./postgresql/postgresql.conf:/etc/postgresql.conf:ro
    command: postgres -c config_file=/etc/postgresql.conf
    networks:
      - nuxnet
    restart: always
volumes:
  # to view current data, run bin/view-data.sh
  certs:
  app-logs: # all server logs, can be shared between instances
  app-data: # contains app data and packages (store cache), can be shared between instances
  db-store: # postgres database
  es-data:
  es-plugins:
  es-config:
  redis-data:
I succeeded in making REST requests between two containers using the container name and the default port.
Did you try the URL http://nuxeo1:8080?

Connection between docker containers as localhost

I am trying to dockerize my web application. I am running an Apache webserver + MariaDB and a Redis server, as you can see in my docker-compose file, combined with an nginx proxy to use local domains and SSL.
Everything works fine as long as I use the container names to connect to MySQL / Redis. I don't want to change all the localhosts in my code to the mysql / redis container names.
Is there a way to keep "localhost" as the host instead of the container names?
version: "3.5"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: portal-proxy
    networks:
      - portal
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs
      - /var/run/docker.sock:/tmp/docker.sock:ro
  portal:
    image: portal:latest
    container_name: portal-webserver
    networks:
      - portal
    volumes:
      - ./portal:/var/www/html/portal
    links:
      - db
    restart: always
    environment:
      VIRTUAL_HOST: portal.dev
  db:
    image: mariadb:latest
    container_name: portal-db
    networks:
      - portal
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_DATABASE: portal
      MYSQL_USER: www-data
      MYSQL_PASSWORD: www-data
      MYSQL_ROOT_PASSWORD: asdf1234
    volumes:
      - ./db:/docker-entrypoint-initdb.d
      - ./db:/var/lib/mysql
  redis:
    image: redis:latest
    container_name: portal-redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    networks:
      - portal
    ports:
      - "6379:6379"
networks:
  portal:
    name: portal
Use a common hostname (staging.docker.host) on all containers, that resolves to the docker host's ip 1.2.3.4.
So adding this to containers:
extra_hosts:
  - "staging.docker.host:1.2.3.4"
and use that name (staging.docker.host) in all you connection endpoints.
On your local machine you also add staging.docker.host to your /etc/hosts (or C:\Windows\System32\drivers\etc\hosts), pointing it at 127.0.0.1.
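The hosts-file entry described above would look like:

```
# append to /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
127.0.0.1   staging.docker.host
```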

Unpredictable behavior of registrator and consul

I have very simple docker-compose config:
version: '3.5'
services:
  consul:
    image: consul:latest
    hostname: "consul"
    command: "consul agent -server -bootstrap-expect 1 -client=0.0.0.0 -ui -data-dir=/tmp"
    environment:
      SERVICE_53_IGNORE: 'true'
      SERVICE_8301_IGNORE: 'true'
      SERVICE_8302_IGNORE: 'true'
      SERVICE_8600_IGNORE: 'true'
      SERVICE_8300_IGNORE: 'true'
      SERVICE_8400_IGNORE: 'true'
      SERVICE_8500_IGNORE: 'true'
    ports:
      - 8300:8300
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - backend
  registrator:
    command: -internal consul://consul:8500
    image: gliderlabs/registrator:master
    depends_on:
      - consul
    links:
      - consul
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
    networks:
      - backend
  image_tagger:
    build: image_tagger
    image: image_tagger:latest
    ports:
      - 8000
    networks:
      - backend
  mongo:
    image: mongo
    command: [--auth]
    ports:
      - "27017:27017"
    restart: always
    networks:
      - backend
    volumes:
      - /mnt/data/mongo-data:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: qwerty
  postgres:
    image: postgres:11.1
    # ports:
    #   - "5432:5432"
    networks:
      - backend
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./scripts:/docker-entrypoint-initdb.d
    restart: always
    environment:
      POSTGRES_PASSWORD: qwerty
      POSTGRES_DB: ttt
      SERVICE_5432_NAME: postgres
      SERVICE_5432_ID: postgres
networks:
  backend:
    name: backend
(and some other services)
I also configured dnsmasq on the host to access containers by their internal names.
I spent a couple of days on this, but still cannot make it stable:
1. Very often some services just do not get registered by registrator (sometimes I get 5 out of 15).
2. Very often containers are registered with the wrong IP address. So in the container info I have one address (correct), and in Consul another (incorrect). And when I want to reach some service at an address like myservice.service.consul, I end up at the wrong container.
3. Sometimes resolution fails entirely, even when containers are registered with the correct IP.
Do I have some mistakes in my config?
So, at least for now, I was able to fix this by passing the -resync 15 param to registrator. Not sure if it's the correct solution, but it works.
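Applied to the compose file above, that fix is just an extra flag on the registrator service's command (the surrounding options are unchanged):

```yaml
registrator:
  image: gliderlabs/registrator:master
  # -resync 15 re-registers all running containers every 15 seconds,
  # repairing services that were missed or recorded with a stale IP
  command: -internal -resync 15 consul://consul:8500
  depends_on:
    - consul
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  networks:
    - backend
```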