I am starting Zookeeper, Kafka, and Kafdrop with docker-compose locally, and everything works.
When I try to do the same thing inside an EC2 instance, I get this error.
The EC2 instance type I'm using is a t2.micro with an EBS volume, in the default VPC and subnet.
docker-compose.yaml
version: "2"
services:
kafdrop:
image: obsidiandynamics/kafdrop
container_name: kafka-web
restart: "no"
ports:
- "9000:9000"
environment:
KAFKA_BROKERCONNECT: "kafka:9092"
JVM_OPTS: "-Xms16M -Xmx48M -Xss180K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify"
depends_on:
- "kafka"
networks:
- nesjs-network
zookeeper:
image: 'docker.io/bitnami/zookeeper:3-debian-10'
container_name: zookeeper
ports:
- 2181:2181
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
networks:
- nesjs-network
kafka:
image: 'docker.io/bitnami/kafka:2-debian-10'
container_name: kafka
ports:
- 9092:9092
- 9093:9093
environment:
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://kafka:9093
- KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper
networks:
- nesjs-network
This docker-compose.yaml works locally without any issue, but it doesn't in my EC2 instance.
The problem is at the EC2 configuration level: Kafka and Kafdrop need more resources (RAM and vCPUs) than a t2.micro provides.
Instead of a t2.micro, use a t2.medium with a 30 GB EBS volume, and leave the other resources (VPC, subnet, security group) at their defaults.
This configuration works for me.
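As a rough sketch (not from the original answer; the AMI ID, key pair, and security group are placeholders you would substitute with your own values), launching such an instance with the AWS CLI could look like:

# Hypothetical example: launch a t2.medium with a 30 GB gp2 root EBS volume.
# ami-xxxxxxxx, my-key, and sg-xxxxxxxx are placeholders.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp2"}}]'

Remember to also open port 9000 (Kafdrop) and, if needed, 9092/9093 (Kafka) in the instance's security group.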
I have 3 containers that communicate through Docker Swarm. If I run my application over http and connect as http://domain.com, everything works fine, but if I use https (https://www.domain.com) my frontend can't communicate with the backend and I get the following error:
Ajax.js:10 POST https://www.domain.com/Init net::ERR_NAME_NOT_RESOLVED
Can someone help me solve my problem and understand the mistake?
Thank you.
I leave my compose file below:
version: '3'
services:
ssl:
image: danieldent/nginx-ssl-proxy
restart: always
environment:
UPSTREAM: myApp:8086
SERVERNAME: dominio.com
ports:
- 80:80/tcp
- 443:443/tcp
depends_on:
- myApp
volumes:
- ./nginxAPP:/etc/letsencrypt
- ./nginxAPP:/etc/nginx/user.conf.d:ro
bdd:
restart: always
image: postgres:12
ports:
- 5432:5432/tcp
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: 12345
POSTGRES_DB: miBDD
volumes:
- ./pgdata:/var/lib/postgresql/data
pgadmin:
image: dpage/pgadmin4
ports:
- 9095:80/tcp
environment:
PGADMIN_DEFAULT_EMAIL: user
PGADMIN_DEFAULT_PASSWORD: 12345
PROXY_X_FOR_COUNT: 3
PROXY_X_PROTO_COUNT: 3
PROXY_X_HOST_COUNT: 3
PROXY_X_PORT_COUNT: 3
volumes:
- ./pgadminAplicattion:/var/lib/pgadmin
myApp:
restart: always
image: appImage
ports:
- 8086:8086
depends_on:
- bdd
working_dir: /usr/myApp
environment:
CONFIG_PATH: ../configuation
command: "node server.js"
I have 7 Docker containers, namely:
es_search
postgres_db
fusionauth
mysql_db
auth
backend
ui
The communication between the containers should be as follows:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
Existing docker-compose
version: '3.1'
services:
postgres_db:
container_name: postgres_db
image: postgres:9.6
environment:
PGDATA: /var/lib/postgresql/data/pgdata
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
ports:
- ${POSTGRES_PORT}:5432
networks:
- postgres_db
restart: unless-stopped
volumes:
- db_data:/var/lib/postgresql/data_test
es_search:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
container_name: es_search
environment:
- cluster.name=fusionauth
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
ports:
- ${ES1_PORT}:9200
- ${ES_PORT}:9300
networks:
- es_search
restart: unless-stopped
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- es_data:/usr/share/elasticsearch/data
fusionauth:
image: fusionauth/fusionauth-app:latest
container_name: fusionauth
depends_on:
- postgres_db
- es_search
environment:
DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
DATABASE_ROOT_USER: ${POSTGRES_USER}
DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
DATABASE_USER: ${DATABASE_USER}
DATABASE_PASSWORD: ${DATABASE_PASSWORD}
FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
FUSIONAUTH_URL: http://fusionauth:9010
networks:
- postgres_db
- es_search
restart: unless-stopped
ports:
- ${FUSIONAUTH_PORT}:9011
volumes:
- fa_config:/usr/local/fusionauth/config
db:
container_name: db
image: mysql:5.7
volumes:
- /etc/nudjur/mysql_data:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=${MYSQL_PASSWORD}
- MYSQL_USER=${MYSQL_USER}
ports:
- ${MYSQL_PORT}:3306
command: --default-authentication-plugin=mysql_native_password
restart: on-failure
backend:
container_name: backend
links:
- db:${MYSQL_HOST}
depends_on:
- db
image: ${BACKEND_IMAGE}
volumes:
- ${ENV_FILE}:/backend/.env
ports:
- ${BACKEND_PORT}:${BACKEND_PORT}
command: >
bash -c "set -a && source .env && set +a"
restart: unless-stopped
UI:
container_name: UI
image: ${UI_IMAGE}
volumes:
- ${ENV_FILE}:/nudjur/.env
ports:
- ${UI_PORT}:${UI_PORT}
command: >
bash -c "PORT=${UI_PORT} npm start"
restart: unless-stopped
auth:
container_name: auth
network_mode: host
image: ${AUTH_IMAGE}
volumes:
- ${ENV_FILE}:/auth/.env
ports:
- ${AUTH_PORT}:${AUTH_PORT}
command: >
bash -c "set -a && source .env && set +a && python3 ./auth_bridge.py --log-level DEBUG run -p ${AUTH_PORT}"
restart: unless-stopped
networks:
postgres_db:
driver: bridge
es_search:
driver: bridge
volumes:
db_data:
es_data:
fa_config:
I am confused about how to establish communication between them.
Can someone help me with this?
I understand you want to restrict communication so that containers can only talk to specific other services, such as:
fusionauth should be able to contact es_search, postgres_db
backend should be able to contact auth, mysql_db
auth should be able to contact fusionauth, backend
ui should be able to contact backend, auth
You can use networks, as you already partially do in your example, to enable communication. The rules are:
services on the same network can reach each other using the service name or an alias - e.g. es_search can be reached by other services on the same network via http://es_search:9200
services on different networks are isolated and cannot communicate with each other
You can then define your networks such as:
services:
postgres_db:
networks:
- postgres_db
es_search:
networks:
- es_search
# fusionauth should be able to contact es_search, postgres_db
fusionauth:
networks:
- fusionauth
- postgres_db
- es_search
db:
networks:
- mysql_db
# backend should be able to contact auth, mysql_db
backend:
networks:
- backend
- auth
- mysql_db
# ui should be able to contact backend, auth
UI:
networks:
- backend
- auth
# auth should be able to contact fusionauth, backend
auth:
networks:
- auth
- fusionauth
- backend
networks:
fusionauth:
backend:
auth:
postgres_db:
es_search:
mysql_db:
Here, every service (except ui) has its own network, and another service must be attached to that service's network to communicate with it.
Note: I did not use links, as the feature is legacy and may be removed in future releases, as stated in the docs.
Delete every single last networks: and container_name: option from the docker-compose.yml file. network_mode: host as you have on the auth container is incompatible with Docker network communication, and seems to usually get presented as a workaround for port-publishing issues; delete that too. You probably want the name ui: to be in lower case as well.
When you do this, Docker Compose will create a single network named default and attach all of the containers it creates to that network. They will all be reachable by their service names. Networking in Compose in the Docker documentation describes this in more detail.
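As a quick check (a sketch; myproject stands in for your Compose project/directory name), you can see the single shared network and everything attached to it:

# Compose names the network <project>_default
docker network ls
# lists every container attached to it, i.e. all of the services above
docker network inspect myproject_default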
Your fusionauth container has the right basic setup. Trimming out some options:
fusionauth:
image: fusionauth/fusionauth-app:latest
depends_on:
- postgres_db
- es_search
environment:
# vvv These host names are other Compose service names
# vvv and default Compose networking makes them reachable
DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
DATABASE_ET: CETERA
FUSIONAUTH_SEARCH_SERVERS: http://es_search:9200
FUSIONAUTH_URL: http://fusionauth:9010
restart: unless-stopped
ports:
- ${FUSIONAUTH_PORT}:9011
volumes:
- fa_config:/usr/local/fusionauth/config
# no networks: or container_name:
If the ui container presents something like a React or Angular front-end, remember that application runs in a browser and not in Docker, and so it will have to reach back to the physical system's DNS name or IP address and published ports:. It's common to introduce an nginx reverse proxy into this setup to serve both the UI code and the REST interface it needs to communicate with.
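For illustration only (a sketch, not part of the question's stack; the nginx service, the port mapping, the /api prefix, and "backend:3000" are all assumptions), such a reverse proxy could be added to the same compose file:

nginx:
  image: nginx:alpine
  ports:
    - 8080:80      # the one published port the browser actually talks to
  volumes:
    # default.conf would serve the built UI assets from /usr/share/nginx/html
    # and contain e.g.:  location /api/ { proxy_pass http://backend:3000/; }
    # ("backend:3000" is an assumed service name and port)
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    - ./ui-build:/usr/share/nginx/html:ro
  depends_on:
    - backend

The browser then only ever talks to nginx's published port, and the service-name URLs stay internal to the Compose network.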
In principle you can set this up with multiple networks for more restrictive communications as the other answers have done. That could prevent ui from contacting postgres_db. I'd be a little surprised to see an environment where that level of control is required, and also where Compose is an appropriate deployment solution.
Also unnecessary, and frequently appearing in other questions like this, are hostname: (only sets containers' own notions of their own host names), links: (only relevant for pre-network Docker), and expose: (similarly).
From your docker-compose.yml:
postgres_db:
..
networks:
- postgres_db_net
fusionauth:
..
environment:
DATABASE_URL: jdbc:postgresql://postgres_db:5432/fusionauth
networks:
- postgres_db_net
The services postgres_db and fusionauth are attached to the network postgres_db_net, which enables them to communicate with each other.
Communication happens via the service name, which also works as a hostname inside the container: the service fusionauth knows the database by its name postgres_db in the DATABASE_URL.
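To sanity-check that resolution (a sketch; getent is available in the Debian-based images used here, though not in every image):

# resolves postgres_db to its container IP on the shared network
docker-compose exec fusionauth getent hosts postgres_db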
I would like to build a Docker landscape. I use a container with a traefik (v2.1) image and a mysql container for multiple databases.
traefik/docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
restart: always
command:
- "--log.level=DEBUG"
- "--api=true"
- "--api.dashboard=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--providers.docker.network=proxy"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.traefik-dashboard.address=:8080"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge=true"
- "--certificatesresolvers.devnik-resolver.acme.httpchallenge.entrypoint=web"
#- "--certificatesresolvers.devnik-resolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
- "--certificatesresolvers.devnik-resolver.acme.email=####"
- "--certificatesresolvers.devnik-resolver.acme.storage=/letsencrypt/acme.json"
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "./letsencrypt:/letsencrypt"
- "./data:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
- "proxy"
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`devnik.dev`)"
- "traefik.http.routers.traefik.entrypoints=traefik-dashboard"
- "traefik.http.routers.traefik.tls.certresolver=devnik-resolver"
#basic auth
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.routers.traefik.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/etc/traefik/.htpasswd"
#Docker Networks
networks:
proxy:
database/docker-compose.yml
version: "3.3"
services:
#MySQL Service
mysql:
image: mysql:5.7
container_name: mysql
restart: always
ports:
- "3306:3306"
volumes:
#persist data
- ./mysqldata/:/var/lib/mysql/
- ./init:/docker-entrypoint-initdb.d
networks:
- "mysql"
environment:
MYSQL_ROOT_PASSWORD: ####
TZ: Europe/Berlin
#Docker Networks
networks:
mysql:
driver: bridge
For this structure, I want to control all projects via multiple docker-compose files. These containers should run on the same network as the traefik container, and some also on the same network as the mysql container.
This also works for the following case (but only sometimes):
dev-releases/docker-compose.yml
version: "3.3"
services:
backend:
image: "registry.gitlab.com/devnik/dev-releases-backend/master:latest"
container_name: "dev-releases-backend"
restart: always
volumes:
#laravel logs
- "./logs/backend:/app/storage/logs"
#cron logs
- "./logs/backend/cron.log:/var/log/cron.log"
labels:
- "traefik.enable=true"
- "traefik.http.routers.dev-releases-backend.rule=Host(`dev-releases.backend.devnik.dev`)"
- "traefik.http.routers.dev-releases-backend.entrypoints=websecure"
- "traefik.http.routers.dev-releases-backend.tls.certresolver=devnik-resolver"
networks:
- proxy
- mysql
environment:
TZ: Europe/Berlin
#Docker Networks
networks:
proxy:
external:
name: "traefik_proxy"
mysql:
external:
name: "database_mysql"
As soon as I restart the containers in dev-releases/ via docker-compose up -d, I get the typical "Gateway timeout" error when calling them in the browser.
As soon as I comment out the mysql network (networks: #- mysql) and restart docker-compose in dev-releases/, it works again.
My guess is that I have not configured the external networks correctly. Is it not possible to use 2 external networks?
I'd like some containers to have access to the 'mysql' network, but it should not be accessible to the whole traefik network.
Let me know if you need more information
EDIT (26.03.2020)
I got it running.
I put all my containers into one network, "proxy". It seems mysql also has to be in the proxy network.
So I add following to database/docker-compose.yml
networks:
proxy:
external:
name: "traefik_proxy"
And I removed the database_mysql network from dev-releases/docker-compose.yml.
Based on the names of the files, your mysql network should be mysql_mysql.
You can verify this by executing:
$> docker network ls
You are also missing a couple of labels for your services, such as:
Traefik command line:
- '--providers.docker.watch=true'
- '--providers.docker.swarmMode=true'
Labels:
- traefik.docker.network=proxy
- traefik.http.services.dev-releases-backend.loadbalancer.server.port=yourport
- traefik.http.routers.dev-releases-backend.service=dev-releases-backend
You can check this for more info.
I have a docker-compose.yaml file as below, and I want to make sure that port 6379 on the server is not exposed to the internet (just to the first container, "web", mentioned).
If I just remove the "expose" lines from the "redis:" section, will that keep my redis working internally but block it from being accessed from outside?
version: '2'
services:
web:
image: myimage/version1:1.4.5
restart: always
ports:
- 8082:3000
container_name: web
networks:
- web
- default
expose:
- '3000'
labels:
- 'traefik.docker.network=web'
- 'traefik.enable=true'
- 'traefik.basic.frontend.rule=Host:abcd.com'
- 'traefik.basic.port=3000'
- 'traefik.basic.protocol=http'
depends_on:
- redis
redis:
image: redis:4.0.5-alpine
restart: always
ports:
- 6379:6379
expose:
- 6379
command: ["redis-server", "--appendonly", "yes"]
hostname: redis
networks:
- web
volumes:
- redis-data:/data
networks:
web:
external: true
volumes:
redis-data:
The expose: option makes the port accessible only to linked services, which is what you want. You should remove the redis ports: entry so the port is not bound to the host.
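A minimal sketch of the adjusted service (same image and options as in the question, with only the ports: entry dropped):

redis:
  image: redis:4.0.5-alpine
  restart: always
  # no ports: entry, so 6379 is never published on the host;
  # "web" still reaches it over the shared networks as redis:6379
  expose:
    - 6379
  command: ["redis-server", "--appendonly", "yes"]
  hostname: redis
  networks:
    - web
  volumes:
    - redis-data:/data

With this change, only containers on the same Docker network can connect to redis; nothing outside the host's Docker networks can reach port 6379 directly.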