Traefik config with Docker

I'm trying to use Traefik in my docker-compose file. My PHP app listens on port 8000:
version: '3'
services:
  traefik:
    image: traefik:1.7.4
    container_name: traefik-${PROJECT_NAME}
    ports:
      - ${TRAEFIK_PORT}:80
      - ${TRAEFIK_PORT_HTTPS}:443
      - ${TRAEFIK_DASHBOARD_PORT}:8080
    volumes:
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - webgateway
  php-fpm:
    build:
      context: .
      dockerfile: Dockerfile-php
    container_name: php-fpm-${PROJECT_NAME}
    ports:
      - 8000
    working_dir: /var/www/html/
    volumes:
      - ../app:/var/www/html
    tty: true
    env_file:
      - ./.env
    entrypoint: /entrypoint.sh
    networks:
      - traefik
networks:
  webgateway:
    driver: bridge
  traefik:
    external:
      name: traefik_webgateway
volumes:
  data-volume: {}
Traefik watches every container:
[docker]
domain = "local"
watch = true
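For context, a minimal sketch of what the full traefik.toml could look like around that [docker] block, assuming Traefik 1.7 defaults; the entryPoints and api sections here are assumptions, not taken from the question:

defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

# enables the dashboard, served on port 8080 by default
[api]

[docker]
domain = "local"
watch = true
exposedByDefault = true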
All containers appear in the Traefik dashboard, but the frontend Host names don't resolve to an IP address, so I can't access the app.
When I go directly to the container's IP address, it works.
Did I miss something in the configuration?

Found it. I added the hostname to my /etc/hosts file.
It works fine with that.
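For reference, a minimal sketch of such an /etc/hosts entry. With Traefik 1.7's docker provider the generated frontend rule defaults to Host:{container-name}.{domain}, so the hostname below is an assumption built from the container name above with PROJECT_NAME=myproject and domain = "local":

127.0.0.1   php-fpm-myproject.local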

Related

Docker containers unable to communicate

I have 2 containers that belong to the same network:
version: '3'
services:
  #PHP Service
  app:
    build:
      context: ./website
      dockerfile: Dockerfile
    image: travellist
    container_name: app
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    ...
    networks:
      - app-network
  administration:
    build:
      dockerfile: Dockerfile
    image: travellist
    container_name: administration
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    environment:
      ....
    networks:
      - app-network
  #Nginx Service
  webserver:
    container_name: webserver
    image: nginx:1.17-alpine
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 8000:80
      - 7999:81
    ...
    networks:
      - app-network
#Docker Networks
networks:
  app-network:
    driver: bridge
As you can see, the two applications run behind NGINX on two different ports... however, I'm unable to send a request from one application to the other. None of the following works (from administration, the one served on container port 81 and published on host port 7999):
localhost:80
localhost:8000
app:80
app:8000
From the administration container you should send your request to the webserver on port 80.
From the administration container, you can first check that you can ping the webserver; if the ping succeeds, the two containers can reach each other on the network, and the request should work.
Please note that port 8000 is only published to the host machine.
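A quick sketch of that check, assuming ping and wget are available inside the administration image (they usually are in busybox/alpine-based images, but that is an assumption):

# open a shell in the administration container
docker exec -it administration sh

# inside the container: Docker's embedded DNS resolves service/container names
ping -c 1 webserver
# hit nginx on its container port (80), not the published host port (8000)
wget -qO- http://webserver:80/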

Docker does not expose static IP for container

I am trying to bind the nginx container to the IP 172.16.238.10, but for some reason Docker ignores the settings in my docker-compose file.
# my docker-compose file
version: "3.9"
services:
  nginx:
    build: nginx/
    ports:
      - 80:80/tcp
    volumes:
      - ./dokcer/nginx/conf/nginx.conf:/etc/nginx/nginx.conf
      - ./dokcer/nginx/conf/hosts:/etc/hosts
      - ./docker/project:/var/www/project
    networks:
      app_net:
        ipv4_address: 172.16.238.10
  php-fpm:
    build: php-fpm/
    ports:
      - 9000:9000/tcp
    volumes:
      - ./dokcer/php-fpm/conf/www.conf:/usr/local/etc/php-fpm.d/www.conf
      - ./dokcer/php-fpm/conf/hosts:/etc/hosts
      - ./dokcer/project:/var/www/project
    networks:
      app_net:
        ipv4_address: 172.16.238.11
networks:
  app_net:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
After build and launch, I looked at the docker_app_net network and saw that the nginx container has IP 172.16.238.2, although I expected it to be 172.16.238.10.
What could be the problem? I would be grateful for any answer because I am confused :c
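A sketch of that inspection step, listing every container on the network together with its assigned address (network name as reported in the question):

docker network inspect docker_app_net \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'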

Connect to database from another container

Please help me if possible.
I need to start 2 applications with a single database.
I have 2 applications: the first is domain.com, the second is api.domain.com. Each application has its own docker-compose.yaml file.
domain.com - CMS
version: "3.8"
services:
web:
container_name: domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
#command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
domain:
ipv4_address: 10.9.0.5
networks:
domain:
driver: bridge
ipam:
config:
- subnet: 10.9.0.0/16
gateway: 10.9.
volumes:
bel_baza:
api.domain.com - Laravel 5.6
version: "3.8"
services:
web:
container_name: api_domain_web
build:
context: ./docker/php
dockerfile: Dockerfile
working_dir: /var/www/html
# command: composer install
volumes:
- ./:/var/www/html
- ./docker/php/app.conf:/etc/apache2/sites-available/000-default.conf
- ./docker/php/hosts:/etc/hosts
networks:
api_domain:
ipv4_address: 10.15.0.5
db:
image: mysql:5.7
command: --default-authentication-plugin=mysql_native_password
restart: always
container_name: api_domain_db
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: domain
MYSQL_USER: user
MYSQL_PASSWORD: user
volumes:
- api_domain_baza:/var/lib/mysql
- ./docker/db:/docker-entrypoint-initdb.d
networks:
api_domain:
ipv4_address: 10.15.0.6
phpmyadmin:
image: phpmyadmin
restart: always
container_name: api_domain_pma
networks:
api_domain:
ipv4_address: 10.15.0.7
redis:
image: redis:3.0
container_name: api_domain_redis
networks:
api_domain:
ipv4_address: 10.15.0.10
networks:
api_domain:
driver: bridge
ipam:
config:
- subnet: 10.15.0.0/16
gateway: 10.15.0.1
volumes:
api_domain_baza:
api_domain started successfully.
I need to connect domain.com to the api_domain_db database. As the connection host, I used the IP address 10.15.0.6, but the first application cannot connect to the second application's database.
What is my problem?
How can I connect domain.com to the database of the 2nd application?
Your problem is that you are using a separate Docker Compose project for each application, and by default those projects cannot access each other's internals:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
doc is here - https://docs.docker.com/compose/networking/
So Compose creates a separate network for each docker-compose project.
If you want them both to see each other's internals, you can create an external Docker network like this:
docker network create --subnet 10.1.0.0/24 network_name
and then use that network in both docker-compose files like this:
networks:
  default:
    external:
      name: network_name

services:
  .....
If you need fixed IPs, you can define them like this:
app:
  image: ...
  networks:
    default:
      ipv4_address: 10.1.0.10
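With both projects on the shared network, the CMS can usually reach the database by container name instead of a hard-coded IP. A hypothetical Laravel-style .env fragment, with the host taken from the db container name and the credentials from the compose file above:

DB_HOST=api_domain_db
DB_PORT=3306
DB_DATABASE=domain
DB_USERNAME=user
DB_PASSWORD=user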

Docker: bind a hostname to a container

I have two containers, nginx and php. How do I configure docker-compose so that when I run wget http://example.com from inside the php container, the hostname resolves to the nginx container?
Map your nginx port to a host port.
If your host is named example.com and nginx is published on port 8080, set up your docker-compose like this:
nginx:
  image: nginx
  hostname: nginx
  ports:
    - "8080:80"
In this case, a request to http://example.com:8080 is effectively answered by the nginx container: host port 8080 is forwarded to the container's port 80.
I've specified static IPs for my containers in docker-compose.yml and added an extra_hosts entry:
services:
  nginx:
    image: nginx
    ports:
      - "8200:80"
      - "8201:443"
    volumes:
      - .:/var/www/html
    networks:
      test:
        ipv4_address: 10.5.0.5
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/html
    environment:
      APP_ENV: "dev"
    networks:
      test:
        ipv4_address: 10.5.0.6
    extra_hosts:
      - "example.com:10.5.0.5"
networks:
  test:
    driver: bridge
    ipam:
      config:
        - subnet: 10.5.0.0/16
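With the extra_hosts entry above, example.com resolves to the nginx container's static IP from inside the php container, so a request like the following should work (assuming wget is installed in the php image):

docker-compose exec php wget -qO- http://example.com/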

Docker container created with docker-compose not visible from outside the server

I've created my docker-compose file with 3 Dockerfiles attached. Everything is working, but currently I'd like to expose port 8000 to the outside.
This is not happening. The host is unreachable :(
What's wrong with this?
version: '3'
services:
  elastic:
    build: ./elastic
    ports:
      - 5500:80
    tty: true
    networks:
      - default
  api:
    build: ./api
    ports:
      - 5000:80
    depends_on:
      - elastic
    tty: true
    networks:
      - default
  web:
    build: ./web
    ports:
      - 8000:80
    depends_on:
      - api
    tty: true
    networks:
      - outside
      - default
networks:
  outside:
    external:
      name: docker_gwbridge
I had a similar issue with an app running on a port other than 80/443. I deployed the app on AWS EC2, and the host could not be reached. In order to make it visible, I had to add an inbound rule in the "Security Groups" of the EC2 instance, which exposed the other ports (8000 in my case).
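A sketch of that rule via the AWS CLI; the security-group id below is a placeholder, and the open-to-the-world CIDR is only suitable for testing:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8000 \
  --cidr 0.0.0.0/0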
