Make Traefik see containers from other networks?

I'm trying to make Traefik notice a container that belongs to a different network. Consider the following docker-compose.yml, which is the only file in that directory:
version: '3.7'
services:
  traefik:
    image: "traefik:v2.1"
    container_name: "traefik"
    hostname: "traefik"
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    command:
      - '--providers.docker.defaultRule=Host(`{{ index .Labels "com.docker.compose.service" }}.docker.localhost`)'
      - '--providers.docker.exposedbydefault=false'
      - '--entrypoints.web.address=:80'
And the following project, sitting in a directory named flask:
flask/docker-compose.yml
version: '3.7'
services:
  flaskapp:
    container_name: flaskapp
    build: flask_app
    expose:
      - 5000
    labels:
      traefik.enable: "true"
      traefik.docker.network: traefik_default
      traefik.http.routers.flaskapp.rule: Host(`flaskapp.localhost`)
      traefik.http.routers.flaskapp.entrypoints: web
flask/flask_app/Dockerfile
FROM python:3.8
RUN python3.8 -m pip install flask
COPY ./main.py .
# the shell-form ENTRYPOINT below relies on the shebang in main.py,
# so the script must be executable inside the image
RUN chmod +x ./main.py
EXPOSE 5000
ENTRYPOINT ./main.py
flask/flask_app/main.py
#!/usr/bin/env python3.8
import flask

app = flask.Flask(__name__)

@app.route('/')
def main():
    return "hello, world"

app.run(host='0.0.0.0')
I basically did sudo docker-compose up in both of the directories and found that flaskapp.localhost receives the connection, but then times out. So I tried sudo docker network connect traefik_default flaskapp to connect flaskapp to the traefik_default network, but this didn't seem to change anything. Why doesn't sudo docker network connect traefik_default flaskapp help? Is there a way to make Traefik see containers from all networks without plugging it into theirs?

It appears that one way to let Traefik reach containers on all docker-compose networks is to run it in host network mode (docker run's --net=host), by adding network_mode: "host" to its service definition. Here's the modified docker-compose.yml:
version: '3.7'
services:
  traefik:
    image: "traefik:v2.1"
    container_name: "traefik"
    hostname: "traefik"
    network_mode: "host"
    # ports:
    #   - "80:80" # published ports are ignored in host network mode; the web entrypoint binds :80 on the host directly
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    command:
      - '--providers.docker.defaultRule=Host(`{{ index .Labels "com.docker.compose.service" }}.docker.localhost`)'
      - '--providers.docker.exposedbydefault=false'
      - '--entrypoints.web.address=:80'
Unfortunately, I don't understand the security implications of this setup and so can't tell whether it's fit for production use, but it seems to solve this particular problem.
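For comparison, the more conventional fix (without host networking) is to declare Traefik's traefik_default network as external in the app project, so the attachment is part of the service definition rather than a one-off docker network connect that gets lost when the container is recreated. A sketch, reusing the network name already given in the question:
# flask/docker-compose.yml (sketch)
version: '3.7'
services:
  flaskapp:
    container_name: flaskapp
    build: flask_app
    expose:
      - 5000
    networks:
      - traefik_default
    labels:
      traefik.enable: "true"
      traefik.docker.network: traefik_default
      traefik.http.routers.flaskapp.rule: Host(`flaskapp.localhost`)
      traefik.http.routers.flaskapp.entrypoints: web
networks:
  traefik_default:
    external: true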

Related

Docker communication inside docker compose and with database which is outside docker

I'm a little bit confused about Docker and network communication. I tried many things, but nothing worked :-(
I have following docker compose:
version: '3'
services:
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    restart: unless-stopped
    tty: true
    ports:
      - 80:80
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - app
    networks:
      - frontend
      - backend
  app:
    restart: unless-stopped
    tty: true
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    expose:
      - "9090"
    ports:
      - 9090:9090
    networks:
      - backend
networks:
  frontend:
  backend:
And I would like to communicate:
from nginx to app (this probably works)
from app to PostgreSQL, which is installed on the server (not in a Docker container)
I cannot get this to work; I tried many things, but something is wrong :-(
You can choose either of these two options:
Make your PostgreSQL listen on all your network interfaces (or only on the Docker bridge for a more secure but more complex setup). To achieve that, make sure your config looks like this:
# grep listen /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
Use host network mode in your docker-compose file, which runs the container in your host's network namespace instead of creating a new network:
network_mode: "host"
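With the first option, the app container reaches the host's PostgreSQL via the gateway IP of the Docker network it is attached to. Besides listen_addresses, pg_hba.conf must also allow that subnet. A sketch with assumed names and addresses (the real values come from docker network inspect):
# find the subnet/gateway of the compose network the app joins
# (the network name is an assumption; Compose prefixes it with the project name)
docker network inspect myproject_backend | grep -E 'Subnet|Gateway'

# /var/lib/pgsql/data/pg_hba.conf — allow the Docker subnet (assumed values)
host    all    all    172.18.0.0/16    md5
The app then connects to the gateway address (e.g. 172.18.0.1) as its database host; remember to reload PostgreSQL after editing the config.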

Docker-Compose: How to depends_on a container on another network? I am getting an error saying container 'undefined' even though networks are linked

I have 2 different services in 2 distinct docker-compose.yml files in 2 different locations.
Service 1: wordpress
version: "3.7"
services:
  # Wordpress
  wordpress:
    depends_on:
      - db
    container_name: wordpress
    image: wordpress:latest
    ports:
      - '8000:80'
    restart: unless-stopped
    volumes: ['./:/var/www/html']
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
    networks:
      - wpsite
networks:
  wpsite:
    driver: bridge
Service 2: frontend
version: "3.7"
services:
frontend:
depends_on:
- wordpress
container_name: frontend
restart: unless-stopped
stdin_open: true
build:
context: ../realm-frontend
volumes:
- static:/realm-frontend/build
networks:
- cms_wpsite
networks:
cms_wpsite:
external: true
I have a shell script that goes to the 2 locations and runs the docker-compose files to create the containers.
Script
cd ~/cms;
docker-compose -f docker-compose.yml up -d --build --force-recreate
cd ../frontend;
docker-compose -f docker-compose.yml up -d --build --force-recreate
As you can see, I have linked the two projects through the external bridge network. When I run docker network inspect {network id}, I can see that both containers, wordpress and frontend, are in the network. However, when the second project is brought up, its depends_on entry produces the following error:
ERROR: Service 'frontend' depends on service 'wordpress' which is undefined.
I am not sure why this is, given that they're in the same network.
I'd appreciate any help. Thanks!
depends_on only works for services within the same compose file, so to do what you want you would need to use something like wait-for-it.sh. Take a look here for more information: https://docs.docker.com/compose/startup-order/
Something like this may work for you or you can create a custom wait-for-it script as well:
services:
  frontend:
    container_name: frontend
    restart: unless-stopped
    stdin_open: true
    build:
      context: ../realm-frontend
    volumes:
      - static:/realm-frontend/build
    command: ["./wait-for-it.sh", "wordpress:80", "--", "yourfrontendcmd"]
    networks:
      - cms_wpsite
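For the command above to work, wait-for-it.sh must exist inside the frontend image. A minimal sketch of the Dockerfile additions, assuming the script (from https://github.com/vishnubob/wait-for-it) has been copied into the ../realm-frontend build context:
# make wait-for-it.sh available in the image's working directory
COPY wait-for-it.sh ./wait-for-it.sh
RUN chmod +x ./wait-for-it.sh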
I think you're misunderstanding something there.
depends_on only works within a single docker-compose file, and it only controls the order in which containers are started and stopped.
https://docs.docker.com/compose/compose-file/#depends_on

Deploy containers from different docker-compose.yml

Currently I have a RabbitMQ message broker and multiple Celery workers that need to be containerized. My problem is: how can I fire up containers using different docker-compose.yml files? My goal is to start RabbitMQ once and for all, and never touch it again.
Currently I have a docker-compose.yml for RabbitMQ:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"
    expose:
      - "5672"
And another docker-compose.yml for celery workers:
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    links:
      - rabbit
However, when I do docker-compose up for the Celery workers, I keep getting the following error:
[ERROR/MainProcess] consumer: Cannot connect to
amqp://admin:**@rabbit:5672//: failed to resolve broker hostname.
Can anyone take a look and see if there is anything wrong with my code? Thanks.
The domain name rabbit in your second docker-compose.yml file does not resolve because there is no service with that name in that file.
As stated in the comments, one solution is to put both the rabbit service and the worker service in the same docker-compose.yml file. In such a setup, all containers started for those services would join the same Docker network, and the service names could be resolved to the IP addresses of their containers.
Since having a single docker-compose.yml file is not convenient in your case, you have to find another way to have containers originating from different docker-compose.yml files join the same Docker network.
To do so, create a dedicated Docker network for that purpose:
docker network create rabbitNetwork
Then, in each docker-compose.yml file, refer to this network in the service definitions:
version: '2'
services:
  rabbit:
    hostname: rabbit
    image: rabbitmq:latest
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    # ports:
    #   - "5672:5672" # there is no need to publish ports on the docker host anymore
    expose:
      - "5672"
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
version: '2'
services:
  worker:
    build:
      context: .
      dockerfile: dockerfile
    volumes:
      - .:/app
    environment:
      - CELERY_BROKER_URL=amqp://admin:mypass@rabbit:5672
    networks:
      - rabbitNet
networks:
  rabbitNet:
    external:
      name: rabbitNetwork
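Putting it together, a usage sketch (the rabbit/ and worker/ directory names are assumptions):
docker network create rabbitNetwork
(cd rabbit && docker-compose up -d)
(cd worker && docker-compose up -d)
# both containers should now be listed under "Containers":
docker network inspect rabbitNetwork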
You can use any file as the service definition. docker-compose.yml is the default file name, but any other name can be passed using the -f argument:
docker-compose -f rabbit-compose.yml COMMAND

Docker Compose Flags

I don't know how to write the docker-compose equivalent of my command:
docker run -d --name=server --restart=always --net network --ip 172.18.0.5 -p 5003:80 -v $APP_PHOTO_DIR:/app/mysql-data -v $APP_CONFIG_DIR:/app/config webserver
I've done this:
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
Are you sure you need a fixed IP address for the container? It is not recommended practice; why do you want to set it explicitly?
docker-compose.yml
version: '3'
services:
  server: # correct, this will be the service's name
    image: webserver # this should be the image name from your command line
    ports:
      - "5003:80" # correct, but only needed if you access the service from outside
    volumes: # the volumes just mirror your command line; you can use env vars
      - $APP_PHOTO_DIR:/app/mysql-data
      - $APP_CONFIG_DIR:/app/config
    command: ["python", "/app/app.py"] # JSON notation strongly recommended
    restart: always
Then docker-compose up -d and that's it. You can access your service from the host at localhost:5003; there is no need for the internal IP.
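A quick sanity check from the host (assuming the app serves HTTP at /):
docker-compose up -d
curl http://localhost:5003/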
For networks, I always include the network specification in the docker-compose file. If the network already exists, Docker will not create a new one.
version: '3'
services:
  server:
    image: app-dependencies
    ports:
      - "5003:80"
    volumes:
      - ./app:/app
    command: python /app/app.py
    restart: always
    networks:
      app_net:
        ipv4_address: 172.18.0.5
networks:
  app_net:
    name: NETWORK_NAME
    driver: bridge
    ipam:
      config:
        - subnet: NETWORK_SUBNET
volumes:
  VOLUME_NAME:
    driver: local
And you will need to add the volumes separately to match the docker run command.
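On that last point: the original docker run command uses host-path bind mounts driven by environment variables, and bind mounts, unlike the named VOLUME_NAME above, need no top-level volumes: entry. A sketch of the matching service fragment:
services:
  server:
    # ...rest of the service as above...
    volumes:
      - ${APP_PHOTO_DIR}:/app/mysql-data
      - ${APP_CONFIG_DIR}:/app/config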

Docker-compose bridge network & host remote port forwarding at the same container

I'm trying to make a service that forwards a remote database port into a container and is, at the same time, reachable by an alias hostname from the other containers that work with it.
I think that making all containers communicate over the host network is bad practice, so I am trying to set up this configuration.
When I try to add a network with driver: host to the php-fpm service, Docker says:
only one instance of "host" network is allowed
When I try to attach the php-fpm service with:
networks:
  - host
Docker says that it can't find a network with that name.
When I try to define the network in docker-compose by the ID of the built-in host network, the container just won't start.
This is my docker-compose:
version: '3.2'
networks:
  backend-network:
    driver: bridge
  frontend-network:
    driver: bridge
volumes:
  redis-data:
  home-dir:
services:
  &app-service app: &app-service-template
    build:
      context: ./docker/app
      dockerfile: Dockerfile
    volumes:
      - ./src:/app:rw
      - home-dir:/home/user
    hostname: *app-service
    environment:
      FPM_PORT: &php-fpm-port 9001
      FPM_USER: "${USER_ID:-1000}"
      FPM_GROUP: "${GROUP_ID:-1000}"
      APP_ENV: local
      HOME: /home/user
    command: keep-alive.sh
    networks:
      - backend-network
  &php-fpm-service php-fpm:
    <<: *app-service-template
    user: 'root:root'
    restart: always
    hostname: *php-fpm-service
    ports: [*php-fpm-port]
    environment:
      FPM_PORT: *php-fpm-port
      FPM_USER: "${USER_ID:-1000}"
      FPM_GROUP: "${GROUP_ID:-1000}"
      APP_ENV: local
      HOME: /home/user
    entrypoint: /fpm-entrypoint.sh
    command: php-fpm --nodaemonize -R -d "opcache.enable=0" -d "display_startup_errors=On" -d "display_errors=On" -d "error_reporting=E_ALL"
    networks:
      - backend-network
      - frontend-network
  nginx:
    build:
      context: ./docker/nginx
      dockerfile: Dockerfile
    restart: always
    working_dir: /usr/share/nginx/html
    environment:
      FPM_HOST: *php-fpm-service
      FPM_PORT: *php-fpm-port
      ROOT_DIR: '/app/public' # app path must match the php-fpm container path
    volumes:
      - ./src:/app:ro
    ports: ['9999:80']
    depends_on:
      - *php-fpm-service
    networks:
      - frontend-network
Network scheme (the question is about the green line; diagram not reproduced here):
The host runs Debian 7 (updates prohibited) and the containers run the latest Alpine.
