Current compose yaml:
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - "15672:15672"
      - "5672:5672"
    networks:
      - app-tier
My target is to reach the docker_rabbitmq container from docker_php via localhost:
# bash, inside the php container
telnet localhost 15672
How can I configure the network so that container A can reach container B's mapped ports on localhost?
You're limited to the inner (container) ports: if your two containers are in the same Docker-defined network, you can use the internally opened ports of the respective container. To have one container's hostname resolvable from another, you can use the links attribute in the service definition inside your docker-compose.yml (strictly speaking, on a user-defined network Compose already resolves service names via DNS, so links is optional here).
Consider a microservice that you want to be accessed only by the containers on its network; exposing its ports on the host wouldn't make sense there. Now, assuming rabbitmq is the service you want to access from the php service, you define a link to rabbitmq in your php service definition (please note the link/host definition is not bi-directional: if you need php inside your rabbitmq, you need to define a link to php in rabbitmq).
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
    links:
      - rabbitmq
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    networks:
      - app-tier
Now you can access the internal ports of rabbitmq from php, but note the external ports are not accessible; those are for the host.
# inside your `php` container `bash`
telnet rabbitmq <internal_port>   # e.g. 5672 (AMQP) or 15672 (management)
Also note that I got rid of the ports section in rabbitmq; these ports of rabbitmq are now not accessible from the host.
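For a quick sanity check (assuming getent and telnet are actually available inside the php image, which is not guaranteed), something like this should confirm the wiring:
docker-compose exec php getent hosts rabbitmq   # resolves the service name to its network IP
docker-compose exec php telnet rabbitmq 5672    # AMQP port: should connect
telnet localhost 15672                          # from the host: should now fail, the port is no longer published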
Update
If you want the ports opened in rabbitmq to be accessible in php on localhost, the easiest and simplest way would be to run rabbitmq in container network mode, on the network of php. To do this, simply add
network_mode: "service:php"
(for a container outside this Compose file, the form is network_mode: "container:[container name/id]"):
rabbitmq:
  container_name: docker_rabbitmq
  image: "rabbitmq:3-management"
  environment:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
  network_mode: "service:php"
  # hostname: and ports: are removed here -- both conflict with network_mode;
  # if the ports should still be reachable from the host, publish them on php
Related
I need to resolve a container name to its IP address from the Docker host.
The reason for this is: I need a container to run on the host network, but it must also be able to resolve the container "backend", which it connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container-name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
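For reference, a container's current IP can at least be looked up from the host with docker inspect; this is a one-off lookup, not DNS resolution, but may be enough for scripting:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' backend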
---
version: "3.6"
services:
  postgres:
    image: postgres:alpine
    restart: on-failure
    environment:
      - POSTGRES_USER=${APP_POSTGRES_USER:-postgres}
      - POSTGRES_PASSWORD=${APP_POSTGRES_PASS:-postgres}
      - POSTGRES_DB=${APP_POSTGRES_DB:-my_proj}
    ports:
      - "5566:5432"
  server:
    container_name: my_proj_app
    hostname: my_proj_app
    build:
      context: .
    depends_on:
      - postgres
    network_mode: host
    environment:
      - PORT=8080
      - HOST=my_proj_app
    ports:
      - "8080:8080"
Here is my docker-compose.yml.
I can't ping google.com from the my_proj_app container.
Does anybody have an idea what I'm doing wrong?
The error is explained here, https://docs.docker.com/network/host/ : you used host network mode, which behaves differently from the default NAT-based bridge networking.
Host mode networking can be useful to optimize performance, and in situations where a container needs to handle a large range of ports, as it does not require network address translation (NAT), and no “userland-proxy” is created for each port.
Try commenting out network_mode: host.
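A minimal sketch of that change, based on the compose file above (assuming the default bridge network is acceptable for the app):
  server:
    container_name: my_proj_app
    hostname: my_proj_app
    build:
      context: .
    depends_on:
      - postgres
    # network_mode: host   # commented out; the service falls back to the default network
    environment:
      - PORT=8080
      - HOST=my_proj_app
    ports:
      - "8080:8080"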
Server 1 -> 10.0.0.47
Server 2 -> 10.0.1.202
All ports between these two servers are open, as they are in the same AWS VPC.
version: '3.3'
networks:
  net:
    external: true
services:
  backend:
    image: test/test-backend:prod
    ports:
      - "8000:8000"
    depends_on:
      - discovery
ERROR: Connection refused
Note: when I change the compose file as below,
the connection with mongo is established, but I am unable to access the service on port 8000:
networks:
  net:
    external: true
services:
  backend:
    image: test/test-backend:prod
    expose:
      - "27017:27017"
    ports:
      - "8000:8000"
    depends_on:
      - discovery
The expose instruction does not change anything; it's for documentation only. You can read more about it in the Dockerfile reference.
If the two servers are in the same Docker network, you could change the MongoDB port to 8000 in its configuration. Then you don't need to specify a port etc. in the docker-compose configuration.
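A sketch of that idea (a hypothetical mongo service definition; mongod's --port flag sets the listening port):
services:
  mongo:
    image: mongo
    command: ["mongod", "--port", "8000"]
    networks:
      - net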
If you want to access the mongoDB service from outside, you have to change the docker-compose configuration to:
ports:
  - "8000:27017"
So I have this docker compose file
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:5080/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
ports:
- 5080:9091
restart: unless-stopped
I'm new to Docker Compose and trying it out for the first time. I need to be able to access the transmission service via http://localhost:8080, but nginx is returning a 502.
How should I change my compose file so that http://localhost:8080 will connect to the transmission service?
How can I make the transmission service not accessible via http://localhost:5080 and only accessible via http://localhost:8080 using docker compose?
I have tested the configuration below, and it works:
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:9091/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
expose:
- "9091"
restart: unless-stopped
You don't need to expose port 5080 to the host; the nginx container can access the container port directly. The proxy URL needs to point to port 9091. Now you can't directly access the transmission service, but need to go through the proxy server.
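A quick check, using the admin/admin credentials from the compose file above (the exact response depends on the transmission web UI):
curl -u admin:admin http://localhost:8080/   # reaches transmission through the proxy
curl http://localhost:5080/                  # connection refused: the port is no longer published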
You should be able to access the other container using the service name and container port:
- PROXY_URL=http://transmission:9091/
If you do not want to access the transmission service from localhost, do not declare the host port:
ports:
  - 9091
I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that the 9200 port is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost here, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead, put them in the same network, give the containers a name, and access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the ElasticSearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers; the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
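A quick way to verify this (assuming curl is installed in the serverapplication image):
# run from the host: curl elasticsearch from inside the application container
docker-compose exec serverapplication curl http://elasticsearch:9200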