Docker network configuration endpoints

I want to know how to correctly configure the backend endpoint.
I have a Docker setup that runs several containers:
Backend
Frontend
Nginx for backend
DB
From my understanding, since all containers are running on the same machine, I should be able to reach the backend with "host.docker.internal".
Indeed, I can successfully do so on the machine where Docker is running.
However, the frontend is not able to resolve the endpoint "host.docker.internal" if I make a request from another machine. Please note that I am able to reach the frontend from another machine; it's just a matter of endpoint configuration.
Note that "192.168.1.11" is the IP of the machine where Docker is running, and "8888" is the port where the frontend is exposed.
Obviously, I can successfully make the requests from other machines too if I put the static IP address instead of "host.docker.internal". But the question is: since the React frontend application is served from Docker itself, shouldn't it be able to resolve the "host.docker.internal" endpoint?
Just for reference, here is my docker-compose file:
version: "3.8"
services:
db: #mysqldb
image: mysql:5.7
container_name: ${DB_SERVICE_NAME}
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- backend
mrmfrontend:
build:
context: ./mrmfrontend
args:
- REACT_APP_API_BASE_URL=$CLIENT_API_BASE_URL
- REACT_APP_BACKEND_ENDPOINT=$REACT_APP_BACKEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT=$REACT_APP_FRONTEND_ENDPOINT
- REACT_APP_FRONTEND_ENDPOINT_ERROR=$REACT_APP_FRONTEND_ENDPOINT_ERROR
- REACT_APP_CUSTOMER=$REACT_APP_CUSTOMER
- REACT_APP_NAME=$REACT_APP_NAME
- REACT_APP_OWNER=""
ports:
- $REACT_LOCAL_PORT:$REACT_DOCKER_PORT
networks:
- frontend
volumes:
- ./docker-compose/nginx/frontend:/etc/nginx/conf.d/
app:
build:
args:
user: admin
uid: 1000
context: ./MRMBackend
dockerfile: Dockerfile
image: backend
container_name: backend-app
restart: unless-stopped
working_dir: /var/www/
volumes:
- ./MRMBackend:/var/www
networks:
- backend
nginx:
image: nginx:alpine
container_name: backend-nginx
restart: unless-stopped
ports:
- 8000:80
volumes:
- ./MRMBackend:/var/www
- ./docker-compose/nginx/backend:/etc/nginx/conf.d/
networks:
- backend
- frontend
volumes:
db:
networks:
frontend:
driver: bridge
backend:
driver: bridge
The endpoint is configured this way in the .env:
REACT_APP_BACKEND_ENDPOINT="http://host.docker.internal:8000"
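For completeness, a minimal sketch of the workaround mentioned above, assuming (as stated) that the Docker host's LAN IP is 192.168.1.11 and the backend nginx is published on port 8000:

# .env — workaround sketch: use the Docker host's LAN IP instead of host.docker.internal
REACT_APP_BACKEND_ENDPOINT="http://192.168.1.11:8000"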

Related

Traefik 2 network between 2 containers results in Gateway Timeout errors

I'm trying to set up 2 docker containers with docker-compose, 1 is a Traefik proxy and the other is a Vikunja kanban board container.
They both have their own docker-compose file. I can start the containers and the Traefik dashboard doesn't show any issues but when I open the URL in a browser I only get a Gateway Timeout error.
I have been looking at similar questions here and on different platforms, and in nearly all other cases the issue was that the containers were placed on 2 different networks. However, I added a networks directive to the Traefik docker-compose.yml and still have this problem, unless I'm using it wrong.
The docker-compose file for the Vikunja container
(adapted from https://vikunja.io/docs/full-docker-example/)
version: '3'
services:
  api:
    image: vikunja/api
    environment:
      VIKUNJA_DATABASE_HOST: db
      VIKUNJA_DATABASE_PASSWORD: REDACTED
      VIKUNJA_DATABASE_TYPE: mysql
      VIKUNJA_DATABASE_USER: vikunja
      VIKUNJA_DATABASE_DATABASE: vikunja
      VIKUNJA_SERVICE_JWTSECRET: REDACTED
      VIKUNJA_SERVICE_FRONTENDURL: REDACTED
    volumes:
      - ./files:/app/vikunja/files
    networks:
      - web
      - default
    depends_on:
      - db
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-api.rule=Host(`subdomain.domain.de`) && PathPrefix(`/api/v1`, `/dav/`, `/.well-known/`)"
      - "traefik.http.routers.vikunja-api.entrypoints=websecure"
      - "traefik.http.routers.vikunja-api.tls.certResolver=myresolver"

  frontend:
    image: vikunja/frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vikunja-frontend.rule=Host(`subdomain.domain.de`)"
      - "traefik.http.routers.vikunja-frontend.entrypoints=websecure"
      - "traefik.http.routers.vikunja-frontend.tls.certResolver=myresolver"
    networks:
      - web
      - default
    restart: unless-stopped

  db:
    image: mariadb:10
    # note: the original file had two separate command keys (a duplicate-key error in YAML); merged here
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --max-connections=1000
    environment:
      MYSQL_ROOT_PASSWORD: REDACTED
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: REDACTED
      MYSQL_DATABASE: vikunja
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped
    networks:
      - web

networks:
  web:
    external: true
The network directives for the api and frontend services in the Vikunja docker-compose.yml were present in the template (I added one for the db service for testing but it didn't have any effect).
networks:
  - web
After getting a docker error about the network not being found, I created it via docker network create web.
The docker-compose file for the Traefik container
version: '3'
services:
  traefik:
    image: traefik:v2.8
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # dashboard
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./letsencrypt:/letsencrypt
      - ./traefik.http.yml:/etc/traefik/traefik.yml
    networks:
      - web

networks:
  web:
    external: true
I've tried adding the Traefik service to the Vikunja docker-compose.yml in one file but that didn't have any effect either.
I'm thankful for any pointers.
For debugging, you could try to configure all containers to use the host network, to ensure they are really on the same network.
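A minimal sketch of that debugging step, assuming a Linux host (host networking is only fully supported there):

services:
  traefik:
    image: traefik:v2.8
    network_mode: host   # debugging only: shares the host's network stack; port mappings are ignored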
I had a similar issue trying to run two different Docker stacks and getting a
"Gateway Timeout". My issue was solved after changing the port mapping in the second stack's Traefik container and accessing the site with :84 at the end (http://sitename:84):
traefik:
  image: traefik:v2.0
  container_name: "${PROJECT_NAME}_traefik"
  command: --api.insecure=true --providers.docker
  ports:
    - '84:80'
    - '8084:8080'

Docker containers unable to communicate

I have 2 containers that belong to the same network:
version: '3'
services:
  # PHP Service
  app:
    build:
      context: ./website
      dockerfile: Dockerfile
    image: travellist
    container_name: app
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    ...
    networks:
      - app-network

  administration:
    build:
      dockerfile: Dockerfile
    image: travellist
    container_name: administration
    restart: unless-stopped
    depends_on:
      - db
    tty: true
    environment:
      ....
    networks:
      - app-network

  # Nginx Service
  webserver:
    container_name: webserver
    image: nginx:1.17-alpine
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 8000:80
      - 7999:81
    ...
    networks:
      - app-network

# Docker Networks
networks:
  app-network:
    driver: bridge
As you can see, the two applications run behind the same NGINX on 2 different ports... however, I'm unable to send a request from one application to the other. None of the following works (from administration, which is the one served on container port 81, published as 7999):
localhost:80
localhost:8000
app:80
app:8000
From the administration container, you should send your request to the webserver on port 80.
From the administration container, you can first check that you can ping the webserver; if that succeeds, it means the two can reach each other on the network, and therefore you can execute your request.
Please note that port 8000 is only published to the host machine.
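A quick sketch of that check, assuming ping and curl are available inside the administration image:

# On the host: open a shell inside the administration container
docker exec -it administration sh
# Inside the container: verify reachability, then hit nginx on its internal port
ping -c 3 webserver
curl http://webserver:80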

How to use a Docker volume with a Docker image from DockerHub

I deploy my project using Docker Compose. At the moment, my project is built locally, from local project folders. The stack uses a Laravel service, a VueJS service, an NGINX service, a MySQL service and a Redis service.
I set up my docker-compose file following an article from DigitalOcean. The question is as follows: in the article, it was necessary to create volumes that linked my local application files with the files in the container, but what should I do if I want to build my project from DockerHub? After all, I will no longer have the files stored locally.
I tried removing these volumes from the backend and webserver services; the build then completes without errors, but I cannot reach the server via localhost.
Do I just need to push the image right away with the volume? Here is my compose.yml:
version: '3'
services:
  # VueJS Service
  frontend:
    build: ./frontend
    container_name: frontend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: frontend
    working_dir: /var/www/frontend
    ports:
      - "3000:80"
    volumes:
      - ./frontend/:/var/www/frontend
      - ./frontend/node_modules:/var/www/frontend/node_modules
    networks:
      - backend-network

  # PHP Service
  backend:
    build: ./backend
    container_name: backend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: backend
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./backend/:/var/www
      - ./backend/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - backend-network

  # Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "443:443"
      - "8080:8080"
    volumes:
      - ./backend/:/var/www
      - ./backend/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - backend-network

  # MySQL Service
  db:
    image: mysql:5.7.22
    container_name: db
    restart: unless-stopped
    tty: true
    ports:
      - "33062:3306"
    environment:
      MYSQL_DATABASE: qcortex
      MYSQL_ROOT_PASSWORD: kc3wcfjk5
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - dbdata:/var/lib/mysql
      - ./backend/mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - backend-network

  # Redis
  redis:
    image: caster977/redis
    restart: unless-stopped
    container_name: redis
    networks:
      - backend-network

# Docker Networks
networks:
  backend-network:
    driver: bridge

# Volumes
volumes:
  dbdata:
    driver: local
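For reference, when a project is built into images for a registry such as DockerHub, the application files are typically baked into the image with COPY in the Dockerfile rather than bind-mounted at runtime; named volumes like dbdata are still declared in the compose file for persistent data. A hypothetical sketch for the backend image (base image and paths are assumptions, not the article's actual Dockerfile):

# Dockerfile (hypothetical) — bake the code into the image instead of bind-mounting it
FROM php:7.4-fpm
WORKDIR /var/www
COPY . /var/www
COPY php/local.ini /usr/local/etc/php/conf.d/local.ini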

Connection between docker containers as localhost

I am trying to dockerize my web application. I am running an Apache webserver + MariaDB and a Redis server, as you can see in my docker-compose file, combined with an nginx proxy to use local domains and SSL.
Everything works fine as long as I use the container names to connect to MySQL / Redis. I don't want to change all the "localhost" references in my code to the MySQL / Redis container names.
Is there a way to keep "localhost" as the host instead of the container names?
version: "3.5"
services:
nginx-proxy:
image: jwilder/nginx-proxy
container_name: portal-proxy
networks:
- portal
ports:
- "80:80"
- "443:443"
volumes:
- ./certs:/etc/nginx/certs
- /var/run/docker.sock:/tmp/docker.sock:ro
portal:
image: portal:latest
container_name: portal-webserver
networks:
- portal
volumes:
- ./portal:/var/www/html/portal
links:
- db
restart: always
environment:
VIRTUAL_HOST: portal.dev
db:
image: mariadb:latest
container_name: portal-db
networks:
- portal
ports:
- "3306:3306"
restart: always
environment:
MYSQL_DATABASE: portal
MYSQL_USER: www-data
MYSQL_PASSWORD: www-data
MYSQL_ROOT_PASSWORD: asdf1234
volumes:
- ./db:/docker-entrypoint-initdb.d
- ./db:/var/lib/mysql
redis:
image: redis:latest
container_name: portal-redis
environment:
- ALLOW_EMPTY_PASSWORD=yes
networks:
- portal
ports:
- "6379:6379"
networks:
portal:
name: portal
Use a common hostname (staging.docker.host) on all containers that resolves to the Docker host's IP (1.2.3.4 in this example).
Add this to the containers:
extra_hosts:
  - "staging.docker.host:1.2.3.4"
and use that name (staging.docker.host) in all your connection endpoints.
On your local machine, also add staging.docker.host to your /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) with the entry "127.0.0.1 staging.docker.host".
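For example, the application's connection settings would then point at that shared name everywhere (hypothetical values):

# application config (hypothetical) — same hostname works inside and outside Docker
DB_HOST=staging.docker.host
DB_PORT=3306
REDIS_HOST=staging.docker.host
REDIS_PORT=6379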

Docker: container state exit 0

I have a problem with a Docker container.
This is my docker-compose file with 5 services:
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8088:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - mysql
      - php
    networks:
      - laravel

  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    environment:
      MYSQL_DATABASE: homestead
      MYSQL_USER: homestead
      MYSQL_PASSWORD: secret
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel

  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel

  redis:
    image: redis:5.0.0-alpine
    restart: always
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - laravel

  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    tty: true
    working_dir: /var/www/html
    networks:
      - laravel
Then I run
docker-compose up -d
and then
docker-compose ps
to see my containers, and I always see the composer container down with exit code 0.
Can someone explain why I can't keep this container up? Thanks a lot.
composer isn't a program that stays alive. It's a program that does some specific work and then exits.
There's not much purpose in keeping it "up", since it's not going to do anything like the other processes do (nginx intercepts web traffic and writes responses, mysql accepts database connections and reads/writes a database, php serves web content, redis can be connected to as a cache).
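A common pattern, sketched here against the compose file above, is to run Composer as a one-off command instead of a long-lived service:

# Runs composer install in the service's working_dir and removes the container afterwards
docker-compose run --rm composer install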
