I want to access service1 from inside the service2 container by using localhost:5432. How can I do so?
This is what my docker compose currently looks like:
services:
  service1:
    image: postgres:12
    ports:
      - '172.10.1.1:5432:5432'
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    ports:
      - '172.10.1.1:1234:1234'
Please note that I know I can access it by using service1:5432 or just service1, but I would like to use localhost if possible.
It is not possible by default, because each container has its own IP address and network namespace, so localhost inside service2 refers to service2 itself.
But there is a workaround:
Set the network mode to host. The ports are then opened directly on the host machine and are accessible via 127.0.0.1. This does not work on Windows.
That said, I don't know of any good reason why you would want to use localhost for Postgres. Are you trying to authenticate via localhost? Don't do that; use a password instead.
Using the host network may be the solution you are looking for:
https://docs.docker.com/network/host/
services:
  service1:
    image: postgres:12
    network_mode: host
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    network_mode: host
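With both services on the host network, service2 can reach Postgres on the host loopback address. As a minimal sketch, assuming service2 reads its connection string from the environment (the DATABASE_URL variable name is a hypothetical choice, not something from the original file):
services:
  service2:
    build: .
    network_mode: host  # shares the host's network stack (Linux only)
    environment:
      # hypothetical variable; Postgres now listens on the host's port 5432,
      # so it is reachable via localhost from any host-networked process
      - DATABASE_URL=postgres://project:pass@localhost:5432/project
Note that under network_mode: host the ports: and expose: keys are ignored, since the container binds directly to host ports.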
---
I have a unique situation where I need to be able to access a container over a custom local domain (example.test), which I've added to my /etc/hosts file pointing to 127.0.0.1. The library I'm using for OIDC uses this domain for redirecting the browser, and if it is an internal Docker hostname, the browser obviously will not resolve it.
I've tried pointing it to example.test, but it says it cannot connect. I've also tried looking up the private IP of the Docker network, and that just times out.
Add network_mode: host to the service definition of the calling application in the docker-compose.yml file. This allows calls to localhost to be routed to the host machine's localhost rather than the container's localhost.
E.g.
docker-compose.yml
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    restart: always
    logging:
      driver: local
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DB_ADMIN_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${DB_ADMIN_PASSWORD}
    ports:
      - 27017:27017
    volumes:
      - mongodb_data:/data/db
  callingapp:
    image: <some-img>
    restart: always
    logging:
      driver: local
    env_file:
      - callingApp.env
    ports:
      - ${CALLING_APP_PORT}:${CALLING_APP_PORT}
    depends_on:
      - mongodb
    network_mode: host # << Add this line
  app:
    image: <another-img>
    restart: always
    logging:
      driver: local
    depends_on:
      - mongodb
    env_file:
      - app.env
    ports:
      - ${APP_PORT}:${APP_PORT}
volumes:
  mongodb_data:
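One consequence worth noting: once callingapp runs with network_mode: host, it leaves the compose bridge network and can no longer resolve the mongodb service name; it has to use the port published on the host instead. A hedged sketch of what that wiring might look like (the MONGO_URL variable name is an assumption about how the app is configured):
services:
  callingapp:
    network_mode: host
    environment:
      # hypothetical variable; with host networking, mongo is reached through
      # the port the mongodb service publishes on the host (27017), not by name
      - MONGO_URL=mongodb://${DB_ADMIN_USERNAME}:${DB_ADMIN_PASSWORD}@localhost:27017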
---
version: "3.6"
services:
postgres:
image: postgres:alpine
restart: on-failure
environment:
- POSTGRES_USER=${APP_POSTGRES_USER:-postgres}
- POSTGRES_PASSWORD=${APP_POSTGRES_PASS:-postgres}
- POSTGRES_DB=${APP_POSTGRES_DB:-my_proj}
ports:
- "5566:5432"
server:
container_name: my_proj_app
hostname: my_proj_app
build:
context: .
depends_on:
- postgres
network_mode: host
environment:
- PORT=8080
- HOST=my_proj_app
ports:
- "8080:8080"
The above is my docker-compose.yml.
I can't ping google.com from the my_proj_app container.
Does anybody have any idea what I'm doing wrong?
The cause is explained at https://docs.docker.com/network/host/: you used host network mode, which makes the container share the host's network stack instead of going through Docker's NAT'ed bridge network, and which is only supported on Linux hosts.
Host mode networking can be useful to optimize performance, and in situations where a container needs to handle a large range of ports, as it does not require network address translation (NAT), and no “userland-proxy” is created for each port.
Try commenting out network_mode: host.
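If you remove host mode, the server joins the default compose bridge network and reaches the database by service name. A minimal sketch, assuming the app takes its database host from the environment (DB_HOST and DB_PORT are hypothetical variable names, not part of the original file):
services:
  server:
    build:
      context: .
    depends_on:
      - postgres
    environment:
      - PORT=8080
      # hypothetical variables; on the compose network the database is
      # reachable by its service name on the container port, not on 5566
      - DB_HOST=postgres
      - DB_PORT=5432
    ports:
      - "8080:8080"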
So I have this docker compose file
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:5080/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
ports:
- 5080:9091
restart: unless-stopped
I'm new to docker compose and trying it out for the first time. I need to be able to access the transmission service via http://localhost:8080 but nginx is returning a 502.
How should I change my compose file so that http://localhost:8080 will connect to the transmission service?
How can I make the transmission service not accessible via http://localhost:5080 and only accessible via http://localhost:8080 using docker compose?
I have tested the code below and it works:
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:9091/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
expose:
- "9091"
restart: unless-stopped
You don't need to publish port 5080 to the host; the nginx container can reach the transmission container's port directly. The proxy URL needs to point to port 9091. Now you can't access the transmission service directly, but have to go through the proxy server.
You should be able to access the other container using the service name and container port:
- PROXY_URL=http://transmission:9091/
If you do not want to access the transmission service from localhost, do not declare the host side of the port mapping:
ports:
  - 9091
(Note that with only the container port given, Docker still publishes it on a random ephemeral host port; omitting ports entirely and using expose, as in the working example above, keeps the service reachable only from other containers on the same network.)
I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
put them in the same network,
give the containers a name,
and access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers; the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
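For instance, if the server application reads its Elasticsearch endpoint from the environment (an ELASTICSEARCH_URL variable is assumed here; it is not in the original file), the wiring could look like this:
services:
  serverapplication:
    environment:
      # hypothetical variable; "elasticsearch" is resolved by Docker's
      # embedded DNS to the elasticsearch container on the shared network
      - ELASTICSEARCH_URL=http://elasticsearch:9200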
I currently have the following setup:
# https://github.com/SeleniumHQ/docker-selenium
version: "3"
services:
  selenium-hub:
    image: ${DOCKER_REGISTRY}selenium/hub:2.53.1-americium
    container_name: selenium-hub
    ports:
      - 4444:4444
    environment:
      - NODE_MAX_SESSION=5
      - GRID_DEBUG=false
  selenium-chrome:
    image: ${DOCKER_REGISTRY}selenium/node-chrome-debug:2.53.1-americium
    container_name: chrome
    ports:
      - 5900:5900
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
      - SHM-SIZE=2g
      - SCREEN_WIDTH=2560
      - SCREEN_HEIGHT=1440
      - GRID_DEBUG=false
    volumes:
      - /tmp/
      - /dev/shm/:/dev/shm/
  tomcat:
    build:
      context: .
      args:
        ARTIFACTORY: ${DOCKER_REGISTRY}
    container_name: tomcat
    restart: on-failure
    ports:
      - 8080:8080
    depends_on:
      - db
    volumes:
      - ./src/test/resources/tomcat/context.xml:/opt/tomcat/conf/context.xml
      - ./src/test/resources/tomcat/tomcat-users.xml:/opt/tomcat/conf/tomcat-users.xml
The above config sets up a Selenium hub and deploys a webapp to a Tomcat container. The resources that are served will have hrefs of the form http://tomcat:8080/...
If I want to access these resources via their href from the outside, the tomcat name will not resolve, as that DNS entry only exists inside the virtual container network. One solution would be to expose that internal DNS to the host machine, but I have no idea how.
Another would be to do a string replace of the href value, replacing tomcat with localhost, but that looks kind of dirty.
Does anyone know how I can expose the internal DNS to the host machine?
An answer can be found at https://docs.docker.com/config/containers/container-networking/, in the section on how Docker manages the container's /etc/hosts and /etc/resolv.conf files.
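One pragmatic workaround, since the tomcat container already publishes port 8080 on the host, is the same trick used for example.test earlier in this thread: add an entry for the container hostname to the host machine's /etc/hosts, so the browser resolves tomcat to the loopback address and reaches the published port. A sketch (this edits the host's hosts file, not the container's):
# host machine's /etc/hosts (hypothetical entry)
127.0.0.1 tomcat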