Accessing a docker container IP by its name in nginx - docker

I'm running one container, nginx_cont, for the frontend and one container, web_cont, for the backend. I would like the nginx in my frontend container to reach my backend container by its name, with
proxy_pass http://web_cont:8000;
I've tried with the container IP and it works. I've tried with the name web_cont_1, since docker-compose appends a number, and I've tried with web_cont_1.spa_network, since I specified networks: spa_network: in my docker-compose.yml.
I get the error:
nginx_cont_1 | 2022/06/01 12:00:23 [emerg] 1#1: host not found in upstream "web_cont_1" in /etc/nginx/conf.d/nginx.conf:12
Note that when I run docker-compose run containername command, I get the error
ERROR: No such service: containername
where containername is the name I get when I run docker-compose ps.
Any hints?
This is my docker-compose.yml:
version: '3.7'

services:
  nginx_cont:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
    ports:
      - 80:80
      - 3000:3000
      - 6006:6006
    depends_on:
      - web_cont
    networks:
      spa_network:

  web_cont:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles:/app/static
      - mediafiles:/app/media
      - sqlite_db:/app/db
    ports:
      - 8000:8000
    env_file:
      - ./env/prod-sample
    networks:
      spa_network:
        ipv4_address: 172.20.128.2

networks:
  spa_network:
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  sqlite_db:
  staticfiles:
  mediafiles:
Thank you
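For reference, on a user-defined compose network Docker's embedded DNS resolves the compose *service* name (web_cont), not the project-prefixed container name (web_cont_1). A minimal nginx.conf sketch using the service name, with the port 8000 taken from the compose file above:

```nginx
# Sketch: on spa_network the service name "web_cont" resolves via
# Docker's embedded DNS; the backend listens on port 8000.
server {
    listen 80;

    location / {
        # Use the service name from docker-compose.yml, not web_cont_1
        proxy_pass http://web_cont:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note that nginx resolves upstream hostnames at startup, so the backend service must exist on the same network when the nginx container starts.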

Related

How to run tests from Docker in PhpStorm?

I configured PhpStorm to run tests from a docker container in the IDE by clicking the Run button, but I get the following error when I run them:
Doctrine\DBAL\Exception\ConnectionException : An exception occurred in driver: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known
From .env:
database_host: sp_mysql
database_port: null
My docker-compose.yml:
version: "3.4"
services:
nginx:
container_name: sp_nginx
image: nginx
ports:
- 8080:80
volumes:
- ./docker/nginx/conf:/etc/nginx/conf.d/:ro
- ./var/log/nginx/:/var/log/nginx:cached
- ./web:/app/web
depends_on:
- php
networks:
- internal
php:
container_name: sp_php
image: sp/php
build:
context: ./
dockerfile: ./docker/php/Dockerfile
volumes:
- ./:/app
- ~/.ssh:/root/.ssh
depends_on:
- mysql
networks:
- internal
mysql:
container_name: sp_mysql
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: password
volumes:
- ./docker/mysql/conf:/etc/mysql/conf.d
- mysql_data:/var/lib/mysql
ports:
- 3308:3306
networks:
- internal
networks:
internal:
volumes:
mysql_data:
But if I go directly into the php container, it works:
docker exec -it sp_php vendor/bin/phpunit
I set up my IDE Docker/PHPUnit config following this guide:
https://www.youtube.com/watch?v=I7aGWO6K3Ho&t=240s
To solve this problem I set this in .env:
host: "172.19.0.1" # the docker bridge network gateway
port: "3308" # my DB port from docker-compose
https://docs.docker.com/network/network-tutorial-standalone/
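An alternative to hard-coding the bridge gateway IP: when the tests actually execute inside the php container (as the docker exec invocation above does), the database is reachable over the compose network directly, so the .env could point at the service itself. A sketch, valid only under that assumption:

```
# .env sketch: works only when tests run inside a container attached to
# the "internal" network, where the sp_mysql container name resolves.
database_host: sp_mysql
database_port: 3306   # the in-network port, not the host-mapped 3308
```

This avoids depending on the bridge gateway address, which can change between docker network recreations.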

Traefik config with docker

I'm trying to use Traefik in my docker-compose file. My PHP app is listening on port 8000:
version: '3'

services:
  traefik:
    image: traefik:1.7.4
    container_name: traefik-${PROJECT_NAME}
    ports:
      - ${TRAEFIK_PORT}:80
      - ${TRAEFIK_PORT_HTTPS}:443
      - ${TRAEFIK_DASHBOARD_PORT}:8080
    volumes:
      - ./traefik/traefik.toml:/etc/traefik/traefik.toml
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - webgateway

  php-fpm:
    build:
      context: .
      dockerfile: Dockerfile-php
    container_name: php-fpm-${PROJECT_NAME}
    ports:
      - 8000
    working_dir: /var/www/html/
    volumes:
      - ../app:/var/www/html
    tty: true
    env_file:
      - ./.env
    entrypoint: /entrypoint.sh
    networks:
      - traefik

networks:
  webgateway:
    driver: bridge
  traefik:
    external:
      name: traefik_webgateway

volumes:
  data-volume: {}
Traefik is configured to watch every container:
[docker]
domain = "local"
watch = true
All containers appear in the Traefik dashboard, but the frontend Host does not match the IP address and I can't access the app. But when I go directly through the container IP address, it works.
Did I miss something in the configuration?
Found it. I added the host name to my /etc/hosts file.
It works fine with that.
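That fix can be sketched as an /etc/hosts entry mapping Traefik's generated frontend hostname to the machine running Docker. Here myapp.local is a placeholder; with domain = "local", Traefik 1.x derives the Host rule from the container/service name:

```
# /etc/hosts (sketch; "myapp.local" stands in for the actual
# frontend Host rule shown in the Traefik dashboard)
127.0.0.1   myapp.local
```

With the name resolving to the host, requests hit Traefik's published port and the Host header matches the frontend rule.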

Share Docker container through local network and access to it from an another host

I am trying to share a container through my local network, so that I can access it from another machine on the same network. I followed this tutorial (section "With macvlan devices") and succeeded in sharing a simple web container and accessing it from another host.
But the container I want to share is a little more sophisticated, because it communicates with other containers on the host through an internal network.
I tried to bind my existing container created in my docker-compose, but I can't access it. Can you help me, or tell me where I'm going wrong?
This is my docker-compose.yml:
version: "2"
services:
baseimage:
container_name: baseimage
image: base
build:
context: ./
dockerfile: Dockerfile.base
web:
container_name: web
image: web
env_file:
- .env
context: ./
dockerfile: Dockerfile.web
extra_hosts:
- dev.api.exemple.com:127.0.0.1
- dev.admin.exemple.com:127.0.0.1
- dev.www.exemple.com:127.0.0.1
ports:
- 80:80
- 443:443
volumes:
- ./code:/ass
- /var/run/docker.sock:/var/run/docker.sock
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.2
cron:
container_name: cron
image: cron
build:
context: ./
dockerfile: Dockerfile.cron
volumes:
- ./code:/ass
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- web:dev.api.exemple.com
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.3
mysql:
container_name: mysql
image: mysql:5.6
ports:
- 3306:3306
networks:
devbox:
ipv4_address: 172.20.0.4
redis:
container_name: redis
image: redis:3.2.4
ports:
- 6379:6379
networks:
devbox:
ipv4_address: 172.20.0.5
elasticsearch:
container_name: elastic
image: elasticsearch:2.3.4
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- ./es_data:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
devbox:
ipv4_address: 172.20.0.6
chromedriver:
container_name: chromedriver
image: robcherry/docker-chromedriver:latest
privileged: true
ports:
- 4444:4444
environment:
- CHROMEDRIVER_WHITELISTED_IPS='172.20.0.2'
- CHROMEDRIVER_URL_BASE='wd/hub'
- CHROMEDRIVER_EXTRA_ARGS='--ignore-certificate-errors'
networks:
devbox:
ipv4_address: 172.20.0.7
links:
- web:dev.www.exemple.com
networks:
devbox:
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
Create an external network, then assign both the external network and the devbox network to web. web would then be publicly accessible via the external network's public IP address and would communicate with the internal services over the devbox network.
Will post a working example asap.
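A sketch of that suggestion, assuming the macvlan network is created outside compose with a parent interface and subnet matching the LAN (shared_lan is a hypothetical name); only the relevant parts of the web service are shown:

```yaml
# Sketch: attach web to both the internal devbox network and an
# external macvlan network so other LAN hosts can reach it.
services:
  web:
    networks:
      devbox:
        ipv4_address: 172.20.0.2
      shared_lan:          # hypothetical; pre-created macvlan network

networks:
  devbox:
    driver: bridge
  shared_lan:
    external: true
    # created beforehand, e.g.:
    # docker network create -d macvlan --subnet=<LAN subnet> \
    #   --gateway=<LAN gateway> -o parent=<LAN interface> shared_lan
```

The other services (mysql, redis, elasticsearch) stay on devbox only and remain unreachable from the LAN.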

connection refused from host to docker container

I'm trying to run a web app within a docker container. This is my docker-compose.yml:
version: '2'

services:
  web-app:
    image: org/webapp
    container_name: web-app
    ports:
      - "8080:8080"
    expose:
      - "8080"
    volumes:
      - ./code/source:/source
    command: tail -f /dev/null

  postgres:
    image: postgres:9.5
    container_name: local-postgres9.5
    volumes_from:
      - postgres-data

  postgres-data:
    image: busybox
    container_name: postgres9.5-data
    volumes:
      - /var/lib/postgresql/data
When I run
docker-compose up -d
I'm able to connect to the web app from within the container with a curl command. When I try to connect from the host, I get a connection refused error.
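A common cause of this symptom (curl succeeds inside the container but the host gets connection refused) is the app listening only on 127.0.0.1 inside the container, leaving the published 8080:8080 mapping nothing to forward to. A sketch under that assumption; the --host/--port flag names are hypothetical and depend on the actual server:

```yaml
# Sketch: the server must bind 0.0.0.0 (all interfaces), not 127.0.0.1,
# for the host-side port mapping to reach it.
web-app:
  image: org/webapp
  ports:
    - "8080:8080"
  command: ./run-server --host 0.0.0.0 --port 8080   # hypothetical flags
```

Also note that the compose file above starts the container with command: tail -f /dev/null, so a server is only listening if one was launched separately inside the container.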

docker-compose - networks - /etc/hosts is not updated

I am using Docker version 1.12.3 and docker-compose version 1.8.1. I have some services, including for example elasticsearch, rabbitmq and a webapp.
My problem is that a service cannot access another service by its hostname, because docker-compose does not put all the service hosts in the /etc/hosts file. I don't know their IPs, because they are assigned during the docker-compose up phase.
I use the networks feature as described at https://docs.docker.com/compose/networking/ instead of links, because I have a circular reference and links don't support that. But using networks does not put all the service hostnames into each service's /etc/hosts file. I set container_name, I set hostname, but nothing happened. What am I missing?
Here is my docker-compose.yml:
version: '2'

services:
  elasticsearch1:
    image: elasticsearch:5.0
    container_name: "elasticsearch1"
    hostname: "elasticsearch1"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Ned Stark' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - webapp

  elasticsearch2:
    image: elasticsearch:5.0
    container_name: "elasticsearch2"
    hostname: "elasticsearch2"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='Daenerys Targaryen' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp

  elasticsearch3:
    image: elasticsearch:5.0
    container_name: "elasticsearch3"
    hostname: "elasticsearch3"
    command: "elasticsearch -E cluster.name=GameOfThrones -E node.name='John Snow' -E discovery.zen.ping.unicast.hosts=elasticsearch1,elasticsearch2,elasticsearch3"
    volumes:
      - "/opt/elasticsearch/data"
    networks:
      - webapp

  rabbit1:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit1"
    hostname: "rabbit1"
    environment:
      - ERLANG_COOKIE=abcdefg
    networks:
      - webapp

  rabbit2:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit2"
    hostname: "rabbit2"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
      - ENABLE_RAM=true
    networks:
      - webapp

  rabbit3:
    image: harbur/rabbitmq-cluster
    container_name: "rabbit3"
    hostname: "rabbit3"
    environment:
      - ERLANG_COOKIE=abcdefg
      - CLUSTER_WITH=rabbit1
    networks:
      - webapp

  my_webapp:
    image: my_webapp:0.2.0
    container_name: "my_webapp"
    hostname: "my_webapp"
    command: "supervisord -c /etc/supervisor/supervisord.conf -n"
    environment:
      - DYNACONF_SETTINGS=settings.prod
    ports:
      - "8000:8000"
    tty: true
    networks:
      - webapp

networks:
  webapp:
    driver: bridge
This is how I understand that they can't communicate with each other; I get this error on elasticsearch cluster initialization:
Caused by: java.net.UnknownHostException: elasticsearch3
And this is how I run docker-compose:
docker-compose up
If the container expects the hostname to be available immediately when the container starts, that is likely why it's failing.
The hostname isn't going to exist until the other containers start. You can use an entrypoint script to wait until all the hostnames are available, then exec elasticsearch ...
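A minimal sketch of such an entrypoint, assuming the image has a POSIX shell and getent; the hostnames come from the compose file above:

```shell
#!/bin/sh
# entrypoint.sh (sketch): block until each hostname resolves in DNS,
# then hand control to the real command.
wait_for_hosts() {
  for host in "$@"; do
    until getent hosts "$host" >/dev/null 2>&1; do
      echo "waiting for $host to resolve..."
      sleep 1
    done
  done
}

# In the real entrypoint (assumption: elasticsearch is on PATH):
#   wait_for_hosts elasticsearch1 elasticsearch2 elasticsearch3
#   exec elasticsearch "$@"
```

Using exec at the end keeps elasticsearch as PID 1 so it receives stop signals from docker directly.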
