Cannot curl/ping the host IP from inside a Docker container

I am using Docker and have nginx as a reverse proxy, with only ports 80 and 443 exposed on the host machine. The other containers and nginx are on the same network with the bridge driver. The problem is that I am unable to curl/ping the host machine's IP from inside a Docker container, although I can reach the other containers through Docker's internal DNS. Please help.
DOCKER-COMPOSE FILE
version: "3"
services:
mongo:
image: mongo
container_name: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=rootmongo
- MONGO_INITDB_ROOT_PASSWORD=p4Ss#1234#
# For Persistance
volumes:
- db_data:/data/db
- db_backup:/backup
- db_dump:/dump
restart: always
networks:
- app-network
wealth_advisor_admin:
build:
context: ./adminpanel
dockerfile: Dockerfile
image: wealth_advisor_admin
depends_on:
- mongo
restart: unless-stopped
container_name: wealth_advisor_admin
networks:
- app-network
wealth_advisor_web:
build:
context: .
dockerfile: Dockerfile
image: wealth_advisor_web
container_name: wealth_advisor_web
restart: unless-stopped
depends_on:
- mongo
networks:
- app-network
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- wealth_advisor_web
- wealth_advisor_admin
networks:
- app-network
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- wealth_advisor_web
command: ---------------
volumes:
certbot-etc:
web-root:
certbot-var:
db_data:
db_backup:
db_dump:
dhparam:
driver: local
driver_opts:
type: none
device: /root/trade/dhparam/
o: bind
networks:
app-network:
driver: bridge
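If the goal is to reach a service running on the host machine (rather than the host's LAN IP specifically), one common approach, not taken from the original post and requiring Docker 20.10 or newer, is to map host.docker.internal to the host gateway with extra_hosts. A minimal sketch against the webserver service above:

  webserver:
    image: nginx:mainline-alpine
    extra_hosts:
      # "host-gateway" resolves to the host's IP on the Docker bridge (Docker 20.10+)
      - "host.docker.internal:host-gateway"

Inside the container, curl http://host.docker.internal:<port> then reaches the host. If the host IP itself stays unreachable, a host firewall rule dropping traffic from the Docker bridge subnet is worth checking.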

Related

Make a call from one Docker container to another

So I have two different applications: one is WordPress and the other is an API. Both run in Docker containers and have their own configurations. These are their docker-compose settings:
version: "3.8"
services:
app:
container_name: ${APP_NAME}_app
build:
context: .
dockerfile: ./.docker/php/Dockerfile
expose:
- 9000
volumes:
- .:/usr/src/app
- ./public:/usr/src/app/public
depends_on:
- db
networks:
- app_network
nginx:
container_name: ${APP_NAME}_nginx
build:
context: .
dockerfile: ./.docker/nginx/Dockerfile
volumes:
- ./public:/usr/src/app/public
ports:
- "8081:8081"
expose:
- 8081
environment:
NGINX_FPM_HOST: app
NGINX_ROOT: /usr/src/app/public
depends_on:
- app
networks:
- app_network
db:
container_name: ${APP_NAME}_db
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
ports:
- "3307:3306"
environment:
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
networks:
- app_network
networks:
app_network:
driver: bridge
volumes:
db_data:
driver: local
And this is my WordPress configuration:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=somewordpress
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=wordpress
    expose:
      - 3306
      - 33060
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      interval: 1s
      timeout: 3s
      retries: 30
    networks:
      - app_network
  wordpress:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      mysql:
        condition: service_healthy
    volumes:
      - .:/var/www/html/wp-content/plugins/name
    ports:
      - "80:80"
    restart: always
    environment:
      - WORDPRESS_URL=http://localhost
      - WORDPRESS_DB_HOST=mysql
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpress
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
And when I try to make a request to the API at http://localhost:8081, nothing happens. Locally everything works fine, but in Docker it doesn't.
Would appreciate some help on how to make this work :)
If you are inside a Docker container and you call http://localhost:8081, you won't reach your host PC, but the container itself.
In docker-compose you need to replace localhost with the service name:
For example, if you want to access port 8081 of the nginx service, you need to connect to http://nginx:8081.
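Since the API stack and the WordPress stack are separate compose projects, each gets its own app_network bridge, so the service name only resolves once both sides share a network. A minimal sketch, assuming an externally created network (the name shared_net is an assumption):

# created once on the host: docker network create shared_net

# API project docker-compose.yml (additions only)
services:
  nginx:
    networks:
      - app_network
      - shared_net
networks:
  shared_net:
    external: true

# WordPress project docker-compose.yml (additions only)
services:
  wordpress:
    networks:
      - app_network
      - shared_net
networks:
  shared_net:
    external: true

With that in place, the WordPress code can call curl http://nginx:8081 instead of http://localhost:8081.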

HTTP requests between two containers behind one nginx

I cannot use curl to communicate between the curator and dispenser containers.
From the curator container I want to make a request like curl http://dispenser.shadysmaoui.test.
From the host I can reach both containers over HTTP without any problem.
docker-compose.yml
version: '3'
services:
  webserver:
    image: nginx:alpine
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./apps/curator/:/var/www/curator
      - ./apps/dispenser/:/var/www/dispenser
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./docker/nginx/sites/:/etc/nginx/conf.d/
      - ./docker/nginx/ssl/:/etc/ssl/
    networks:
      - app-network
  curator:
    build:
      context: apps/curator
      dockerfile: Dockerfile
    image: digitalocean.com/php
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: curator
      SERVICE_TAGS: dev
    working_dir: /var/www/curator
    volumes:
      - ./apps/curator/:/var/www/curator
      - ./docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
  dispenser:
    build:
      context: apps/dispenser
      dockerfile: Dockerfile
    image: digitalocean.com/php
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: dispenser
      SERVICE_TAGS: dev
    working_dir: /var/www/dispenser
    volumes:
      - ./apps/dispenser/:/var/www/dispenser
      - ./docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
    driver: local
Between containers, use the service name: curl http://webserver or https://webserver.
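If the curator code must keep using the full domain (so nginx can pick the right server block), one hedged alternative is to give the webserver service network aliases for those domains; the alias names below assume they match the server_name entries in the nginx configs:

  webserver:
    image: nginx:alpine
    networks:
      app-network:
        aliases:
          # resolve the vhost names to the webserver from any container on app-network
          - curator.shadysmaoui.test
          - dispenser.shadysmaoui.test

Then curl http://dispenser.shadysmaoui.test from the curator container lands on nginx, which routes by the Host header to the dispenser vhost.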

Docker container hostname does not resolve when I am not connected to the internet

I am unable to access my PHP container either by hostname or IP address when I am not connected to the internet. However, as soon as I connect to the internet I can access the container by IP or hostname. I am on Linux (Debian Stretch).
/etc/hosts
127.0.0.1 test.dkr
docker-compose.yml
version: '3.2'
services:
  web:
    container_name: lamp_web
    build:
      context: .
      dockerfile: ./images/php/Dockerfile
    image: lamp-php-apache:7.2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www/lamp:/var/www/
      - ./configs/php/lamp.ini:/usr/local/etc/php/lamp
      - ./logs/apache:/var/log/apache2/lamp
    tty: true
    stdin_open: true
    links:
      - database:mysql
    environment:
      - APACHE_SERVERADMIN=admin#localhost
      - APACHE_RUN_USER=www-data
      - APACHE_RUN_GROUP=www-data
      - APACHE_LOG_DIR=/var/log/apache2
      - APACHE_PID_FILE=/var/run/apache2.pid
      - APACHE_RUN_DIR=/var/run/apache2
      - APACHE_LOCK_DIR=/var/lock/apache2
      - XDEBUG_CONFIG=remote_host=host.docker.internal remote_port=9000 remote_enable=1
    networks:
      lamp:
        ipv4_address: 172.19.0.3
  database:
    container_name: lamp_mysql
    build:
      context: .
      dockerfile: ./images/mysql/Dockerfile
    image: lamp-mysql:5.7
    ports:
      - "3306:3306"
    volumes:
      - ./data/mysql:/var/lib/mysql
    networks:
      - lamp
networks:
  lamp:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/24

How to open phpMyAdmin as a domain on localhost?

I have one docker-compose file that has nginx, MySQL and phpMyAdmin.
phpMyAdmin is exposed on port 8080 and reachable at mydomain.com:8080.
How can I convert this address to http://phpmyadmin.mydomain.com or http://pma.mydomain.com on my server?
I have used this docker-compose file:
version: '3'
services:
  # PHP Service
  app:
    build:
      context: .
      dockerfile: ./Dockerfile/Dockerfile
    container_name: app
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: app
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./Beh:/var/www
      - ./Config/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - app-network
  # Nginx Service
  webserver:
    image: nginx:alpine
    container_name: webserver
    restart: unless-stopped
    tty: true
    ports:
      - "80:80"
      # - "443:443"
    volumes:
      - ./Beh:/var/www
      - ./Config/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - app-network
  database:
    image: mariadb:latest
    container_name: database
    environment:
      - "MYSQL_USERNAME=root"
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
      - "3306:3306"
    networks:
      - app-network
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: phpmyadmin
    environment:
      - "MYSQL_USERNAME=root"
      - "MYSQL_ROOT_PASSWORD=secret"
      - "PMA_HOST=database"
    links:
      - database
    ports:
      - "8080:80"
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
# Volumes
volumes:
  dbdata:
    driver: local
I don't know how to configure the nginx config file to achieve this goal.
You can use this repo -> https://github.com/jwilder/nginx-proxy
This is what I'm using to create a new local domain name for each project.
You have to add VIRTUAL_HOST to the environment, expose the port ("expose: 80") and also add the new domain to your /etc/hosts.
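A minimal sketch of that setup, following the usage documented in the nginx-proxy README; the domain name is an assumption, and the proxy would need to take over port 80 from the existing webserver (or sit in front of it):

  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # lets the proxy watch running containers and generate vhosts from VIRTUAL_HOST
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - app-network
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      - "PMA_HOST=database"
      - "VIRTUAL_HOST=pma.mydomain.com"
    expose:
      - 80
    networks:
      - app-network

For local testing, pma.mydomain.com also has to point at the server, e.g. via an /etc/hosts entry as the answer says.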

Share a Docker container on the local network and access it from another host

I am trying to share a container on my local network, so that I can access it from another machine on the same network. I followed this tutorial (section "With macvlan devices") and succeeded in sharing a simple web container and accessing it from another host.
But the container I want to share is a little more sophisticated, because it communicates with other containers on the host through an internal network.
I tried to bind my existing container created in my docker-compose, but I can't access it. Can you help me, or tell me where I'm going wrong?
This is my docker-compose:
version: "2"
services:
baseimage:
container_name: baseimage
image: base
build:
context: ./
dockerfile: Dockerfile.base
web:
container_name: web
image: web
env_file:
- .env
context: ./
dockerfile: Dockerfile.web
extra_hosts:
- dev.api.exemple.com:127.0.0.1
- dev.admin.exemple.com:127.0.0.1
- dev.www.exemple.com:127.0.0.1
ports:
- 80:80
- 443:443
volumes:
- ./code:/ass
- /var/run/docker.sock:/var/run/docker.sock
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.2
cron:
container_name: cron
image: cron
build:
context: ./
dockerfile: Dockerfile.cron
volumes:
- ./code:/ass
tty: true
dns:
- 8.8.8.8
- 8.8.4.4
links:
- web:dev.api.exemple.com
- mysql
- redis
- elasticsearch
- baseimage
networks:
devbox:
ipv4_address: 172.20.0.3
mysql:
container_name: mysql
image: mysql:5.6
ports:
- 3306:3306
networks:
devbox:
ipv4_address: 172.20.0.4
redis:
container_name: redis
image: redis:3.2.4
ports:
- 6379:6379
networks:
devbox:
ipv4_address: 172.20.0.5
elasticsearch:
container_name: elastic
image: elasticsearch:2.3.4
environment:
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- ./es_data:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
devbox:
ipv4_address: 172.20.0.6
chromedriver:
container_name: chromedriver
image: robcherry/docker-chromedriver:latest
privileged: true
ports:
- 4444:4444
environment:
- CHROMEDRIVER_WHITELISTED_IPS='172.20.0.2'
- CHROMEDRIVER_URL_BASE='wd/hub'
- CHROMEDRIVER_EXTRA_ARGS='--ignore-certificate-errors'
networks:
devbox:
ipv4_address: 172.20.0.7
links:
- web:dev.www.exemple.com
networks:
devbox:
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
Create an external network, then assign both the external network and the devbox network to web. Web would then be publicly accessible via the external network's public IP address and communicate with the internal services over the devbox network.
Will post a working example asap.
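A minimal sketch of that layout, assuming a macvlan network created outside compose; the network name pub_net, the parent interface eth0 and the 192.168.1.x addresses are all assumptions for the example:

# created once on the host:
#   docker network create -d macvlan \
#     --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
#     -o parent=eth0 pub_net

services:
  web:
    networks:
      devbox:
        ipv4_address: 172.20.0.2
      pub_net:
        # the address other machines on the LAN would use to reach the container
        ipv4_address: 192.168.1.210

networks:
  devbox:
    driver: bridge
  pub_net:
    external: true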
