I have a BackupPC service running locally, e.g. on 127.0.0.1:8081.
I can also reach it directly at http://172.23.0.4 (the container IP).
docker-compose.yml
version: '3.7'
services:
  backuppc-app:
    image: tiredofit/backuppc
    container_name: backuppc-app
    ports:
      - "8081:80"
      - "8082:10050"
    environment:
      - BACKUPPC_UUID=1000
      - BACKUPPC_GUID=1000
    restart: always
    depends_on:
      - backuppc-mysql
    networks:
      - nginx-proxy
I want to assign it a hostname, something like
hostname: backup.local
I tried to add it, but it doesn't work as expected:
backuppc-app:
  image: tiredofit/backuppc
  container_name: backuppc-app
  hostname: backup.local
Should I manually edit my local /etc/hosts?
172.23.0.4 backup.local
You can add a hostname as a network alias:
version: '3.7'
services:
  backuppc-app:
    networks:
      nginx-proxy:
        aliases:
          - backup.local
For containers in the nginx-proxy network, the service will be reachable both as backuppc-app and as backup.local.
If you want that hostname to be visible to your host, you need to modify the hosts file. But don't put the container IP there, since it can change; instead, add the name as another alias for localhost:
127.0.0.1 localhost myhostname backup.local
Then you can access the service both at localhost:8081 and backup.local:8081 (that works thanks to the port forwarding you declared with the ports: key).
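A minimal sketch of that hosts-file change, run here against a throwaway copy so it is safe to try (editing the real /etc/hosts requires root, e.g. via sudo):

```shell
# Work on a temporary copy of the hosts file, not /etc/hosts itself.
hosts=$(mktemp)
printf '127.0.0.1 localhost myhostname\n' > "$hosts"

# Append backup.local as another name for the 127.0.0.1 line (GNU sed).
sed -i 's/^127\.0\.0\.1 .*/& backup.local/' "$hosts"

cat "$hosts"   # 127.0.0.1 localhost myhostname backup.local
```

On the real system the same sed invocation against /etc/hosts (as root) gives you backup.local:8081 in the browser.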
Related
I have two containers with different default networks, and I want them to communicate with each other. First I created a common network named "my_network", then opened a shell in container1. I could ping the other container by name (container2), but when I tried telnet on port 4000, I got the error:
telnet: Unable to connect to remote host: Connection refused
A curl request didn't work either. But when I replaced the container name with the host's IP address (e.g. 10.244.140.92), everything worked fine. So what am I doing wrong?
My simplified compose:
version: "3.9"
networks:
  my_network:
    driver: bridge
    external: true
  default:
services:
  container1:
    image: ...
    ports:
      - 5000:80
      - 5001:443
    networks:
      - default
      - my_network
version: "3.9"
networks:
  my_network:
services:
  container2:
    image: ...
    ports:
      - 4000:4000
    networks:
      - my_network
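The question leaves the cause open, but one thing worth checking (an assumption, not a confirmed diagnosis): in the second file, my_network is not marked external, so Compose creates a separate project-scoped network with that name instead of joining the pre-created one. A sketch of the second file with that changed:

```yaml
version: "3.9"
networks:
  my_network:
    external: true   # join the pre-created network instead of creating a new one
services:
  container2:
    image: ...
    ports:
      - 4000:4000
    networks:
      - my_network
```

Also note that "Connection refused" with working name resolution usually means nothing inside container2 is listening on port 4000 on all interfaces; the published ports: mapping only affects access via the host's IP.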
I have a unique situation where I need to access a container over a custom local domain (example.test), which I've added to my /etc/hosts file pointing to 127.0.0.1. The library I'm using for OIDC uses this domain for redirecting the browser, and if it is an internal Docker hostname, the browser will obviously not resolve it.
I've tried pointing it to example.test, but it says it cannot connect. I've also tried looking up the private IP of the Docker network, and that just times out.
Add network_mode: host to the service definition of the calling application in the docker-compose.yml file. This routes calls to localhost to the server's localhost rather than the container's localhost. Note that with host networking the container shares the host's network stack, so the service's own ports: mapping is ignored.
E.g.
docker-compose.yml
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    restart: always
    logging:
      driver: local
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_ADMIN_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_ADMIN_PASSWORD}
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db
  callingapp:
    image: <some-img>
    restart: always
    logging:
      driver: local
    env_file:
      - callingApp.env
    ports:   # note: ignored once network_mode: host is set
      - ${CALLING_APP_PORT}:${CALLING_APP_PORT}
    depends_on:
      - mongodb
    network_mode: host   # << add this line
  app:
    image: <another-img>
    restart: always
    logging:
      driver: local
    depends_on:
      - mongodb
    env_file:
      - app.env
    ports:
      - ${APP_PORT}:${APP_PORT}
volumes:
  mongodb_data:
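An alternative to host networking worth considering (my suggestion, not part of the original answer; requires Docker 20.10+): map the custom domain to the host's gateway address from inside the calling container, so example.test resolves back to the host machine where the target service publishes its port:

```yaml
services:
  callingapp:
    extra_hosts:
      - "example.test:host-gateway"   # resolves to the Docker host from inside the container
```

This keeps the container on its normal bridge network, so its own ports: mappings keep working.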
So I have this docker compose file
version: "2.1"
services:
  nginx:
    image: pottava/proxy
    ports:
      - 8080:80
    environment:
      - PROXY_URL=http://transmission-container:5080/
      - BASIC_AUTH_USER=admin
      - BASIC_AUTH_PASS=admin
      - ACCESS_LOG=true
  transmission:
    image: linuxserver/transmission
    container_name: transmission-container
    ports:
      - 5080:9091
    restart: unless-stopped
I'm new to docker compose and trying it out for the first time. I need to be able to access the transmission service via http://localhost:8080 but nginx is returning a 502.
How should I change my compose file so that http://localhost:8080 will connect to the transmission service?
How can I make the transmission service not accessible via http://localhost:5080 and only accessible via http://localhost:8080 using docker compose?
I have tested the code below; it is working:
version: "2.1"
services:
  nginx:
    image: pottava/proxy
    ports:
      - 8080:80
    environment:
      - PROXY_URL=http://transmission-container:9091/
      - BASIC_AUTH_USER=admin
      - BASIC_AUTH_PASS=admin
      - ACCESS_LOG=true
  transmission:
    image: linuxserver/transmission
    container_name: transmission-container
    expose:
      - "9091"
    restart: unless-stopped
You don't need to publish port 5080 to the host; the nginx container can reach the container's port directly. The proxy URL needs to point to port 9091. Now you can't access the transmission service directly but have to go through the proxy server.
You should be able to access the other container using the service name and container port:
- PROXY_URL=http://transmission:9091/
If you do not want to access the transmission service from localhost, do not declare the host port:
ports:
  - 9091
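Putting the two suggestions together, a minimal sketch of the compose file (same images as in the question; using the service name transmission instead of a container_name alias):

```yaml
version: "2.1"
services:
  nginx:
    image: pottava/proxy
    ports:
      - 8080:80
    environment:
      - PROXY_URL=http://transmission:9091/   # service name + internal port
      - BASIC_AUTH_USER=admin
      - BASIC_AUTH_PASS=admin
      - ACCESS_LOG=true
  transmission:
    image: linuxserver/transmission
    expose:
      - "9091"   # reachable by other containers only, not from the host
    restart: unless-stopped
```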
Current compose yaml:
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - "15672:15672"
      - "5672:5672"
    networks:
      - app-tier
My target is to reach the docker_rabbitmq container from docker_php via localhost:
# inside the php container's bash
telnet localhost 15672
How can I configure a network so that container A can reach container B's ports on localhost?
You're limited by the inner port: if your two containers are on the same Docker-defined network, you can use the internally opened ports of the respective container. To make a hostname for one container available inside another, you can use the links attribute in the service definition in your docker-compose.yml. (On a user-defined network the service name already resolves via Docker's DNS, so links is largely a legacy convenience here.)
Consider a microservice that should only be accessed by the containers on its network; exposing its ports on the host wouldn't make sense. Assuming rabbitmq is the service you want to access from the php service, you need to define a link to rabbitmq in your php service definition (please note the link/host definition is not bi-directional: if you need php inside rabbitmq, you need to define a link to php in rabbitmq).
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
    links:
      - rabbitmq
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    networks:
      - app-tier
Now you can access the internal ports of rabbitmq from php, but note the external ports are not accessible; those are for the host.
# inside your php container's bash
telnet rabbitmq <internal_port>
Also note that I removed the ports: section from rabbitmq, so those ports are no longer accessible from the host.
Update
If you want the ports opened in rabbitmq to be accessible from php on localhost, the easiest and simplest way is to run rabbitmq in container network mode, sharing php's network namespace. To do this, add:
network_mode: "container:[container name/id]"
rabbitmq:
  container_name: docker_rabbitmq
  image: "rabbitmq:3-management"
  hostname: "rabbitmq-localhost"
  environment:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
  network_mode: "container:docker_php"   # the php container's name (container_name), not the service name
  # no ports: here; a container in container network mode cannot publish ports itself
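One caveat from Docker's networking rules (not stated in the original answer): a container that joins another container's network namespace cannot publish ports itself; Docker rejects a ports: mapping combined with a container: network mode. The mappings would instead live on the owning php container, e.g. (a sketch reusing the names from above):

```yaml
php:
  container_name: docker_php
  build: .docker/php73
  ports:
    - "15672:15672"   # published on php because rabbitmq shares its network namespace
    - "5672:5672"
```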
How to configure hostnames with domains in docker-compose.yml?
Let's say the service worker expects the service web at the address http://web.local/. But web.local doesn't resolve to an IP address no matter what I configure with the hostname directive. Adding an extra_hosts directive doesn't work either, as I would need to know the IP of the web service for that, which I don't since it is assigned by Docker.
docker-compose.yml:
version: '3'
services:
  worker:
    build: ./worker
    networks:
      - mynet
  web:
    build: ./web
    ports:
      - 80:80
    hostname: web.local
    networks:
      - mynet
networks:
  mynet:
but ping web.local doesn't resolve inside the worker service.
For this to work you need to add an alias on the network mynet.
From the official documentation:
Aliases (alternative hostnames) for this service on the network. Other
containers on the same network can use either the service name or this
alias to connect to one of the service’s containers.
So, your docker-compose.yml file should look like this:
version: '3'
services:
  worker:
    build: ./worker
    networks:
      - mynet
  web:
    build: ./web
    ports:
      - 80:80
    hostname: web.local
    networks:
      mynet:
        aliases:
          - web.local
networks:
  mynet: