docker-compose resolve hostname in url

Tried looking around but couldn't find anything close to what I need.
I have a docker-compose file with a container (web) that uses another container's (api) address in an environment variable, relying on hostname resolution:
version: '3'
services:
  web:
    build: ../client/
    ports:
      - "5000:5000"
      - "3000:3000"
    environment:
      REACT_APP_API_DEV: http://api:8000/server/graphql
  api:
    build: ../server/
    env_file:
      - server_variables.env
    ports:
      - "8000:8000"
  redis:
    image: "redis:alpine"
My issue is that web doesn't resolve this variable when it's running. I can ping api just fine inside the web container but http://api:8000 doesn't resolve properly. I also tried making HOST=api the variable and building the URI manually but that doesn't work either.
EDIT: I added a complete docker-compose.yml file for reference. I can curl the api just fine from inside the web container, but my app can't seem to resolve it properly. I'm using NodeJS and React

Alright, I found the issue. Apparently, my web app was fetching from api with the http://api:8000 URI, but that request is actually made by the browser, which doesn't know what api is (only the containers do).
I followed the suggestions linked here for resolving the hostname on my machine and it worked out.
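For reference, on Linux or macOS that fix typically boils down to a hosts-file entry mapping the service name to loopback, so the browser can resolve it (a sketch; assumes the stack runs on the local machine and the api port is published):

# /etc/hosts -- hypothetical entry so the browser can resolve "api"
127.0.0.1   api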

You have to put them on a shared network:
version: '3'
services:
  web:
    ...
    environment:
      - HOST=http://api:8000
    networks:
      - my-network
    ...
  api:
    networks:
      - my-network
    ...
networks:
  my-network:
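Since the asker could already ping and curl api from inside web, the container-to-container side was fine; you can confirm the same on any shared network from the host (using the service names above):

docker-compose exec web ping -c 1 api
docker-compose exec web curl -s http://api:8000/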

Related

Fail to resolve docker compose service name from front end

Hi, I'm new to using Docker for development. I'm trying to communicate from the frontend (React) to the backend (Express.js) here.
I have enabled CORS as well. I'm getting net::ERR_NAME_NOT_RESOLVED when trying to fetch from the backend using the URL http://backend:4001,
but it works when I use the Docker-internal IP address, like http://172.18.0.3:4001.
Following is my docker-compose.yml file.
Please advise on getting this working, thanks.
version: "3"
services:
backend:
build: ./api
volumes:
- ./api:/usr/src/api
ports:
- 6002:4001
depends_on:
- database
database:
image: mongo:4.0.15-xenial
ports:
- 27018:27017
frontend:
build: ./app
volumes:
- ./app:/usr/src/app
ports:
- 6001:3000
links:
- backend
depends_on:
- backend
It will not work because your browser (the internet client) is not part of the Docker stack's network. You have to configure your frontend to connect to http://localhost:6002 instead of http://backend:4001.
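One way to wire that in is to hand the published URL to the frontend through an environment variable (a sketch; REACT_APP_API_URL is a hypothetical name that your React code would have to read):

frontend:
  build: ./app
  volumes:
    - ./app:/usr/src/app
  ports:
    - 6001:3000
  environment:
    - REACT_APP_API_URL=http://localhost:6002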

How to access docker container using localhost address

I am trying to access a docker container from another container using localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
In my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside django_container, trying the same command gives a Connection refused error.
I tried putting them on the same network, but the result didn't change.
However, if I use the container's internal IP, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
So with this line here, - "8124:8123", you're mapping the clickhouse container's port to port 8124 on localhost, which allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in Billy Ferguson's answer, you can visit it using localhost on the host machine only because you define a port mapping routing localhost:8124 to clickhouse:8123.
But from another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode. With this, though, the django container shares the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    # links and published ports are omitted here: Docker rejects links
    # combined with host networking, and port mappings are ignored in this mode
    network_mode: "host"
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.

Docker hostnames are not resolved in a custom network

I have the following configuration in my docker-compose.yml file.
version: '3.3'
services:
  service-1:
    container_name: 'service-1'
    build: './service-1'
    depends_on:
      - 'mongo'
      - 'consul'
    networks:
      backend:
        aliases:
          - service-1
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      frontend:
      backend:
        aliases:
          - service-2
    depends_on:
      - 'mongo'
      - 'consul'
  consul:
    image: 'consul:latest'
    networks:
      backend:
        aliases:
          - consul
  mongo:
    image: 'mongo:latest'
    networks:
      backend:
        aliases:
          - mongo
networks:
  frontend:
  backend:
    internal: true
When my containers start, they are not able to communicate with each other using hostnames.
Most of the containers use the mongo DB container, but they cannot even reach it, and I get the following error.
Error connecting to mongo : no reachable servers
Please help me solve the problem; I'm stuck.
Thanks.
You've got a lot of unneeded settings in the compose file; here's a stripped-down version that would work just as well:
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      - frontend
      - backend
  consul:
    image: 'consul:latest'
    networks:
      - backend
  mongo:
    image: 'mongo:latest'
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
You automatically get an alias of the service name for each container, so there's no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally (see the sketch below).
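That last suggestion might look like this (a sketch with hypothetical image names; you'd build and tag the image separately, e.g. with docker build -t myorg/service-1 ./service-1):

services:
  service-1:
    image: 'myorg/service-1:latest'
    networks:
      - backend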
Now for the likely issue: you have a depends_on in your compose file. At best, this will not do what you're looking for. All it checks is that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on docker for this, update your application entrypoint to check for the external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it, written as a bash shell script.
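Wiring that into a service might look like this (a sketch; it assumes wait-for-it.sh is copied into the image's working directory, and node server.js stands in for whatever the real start command is):

services:
  service-1:
    build: './service-1'
    networks:
      - backend
    # hypothetical entrypoint: wait up to 60s for mongo before starting the app
    command: ["./wait-for-it.sh", "mongo:27017", "--timeout=60", "--", "node", "server.js"]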

Docker-compose doesn't resolve DNS to correct service

I have two services, web and helloworld. The following is my docker-compose YAML file:
version: "3"
services:
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
ports:
- 5001:80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
ports:
- 5000:80
depends_on:
- helloworld
Inside the index.html in web, I made a button that opens http://helloworld when clicked. However, my button ends up going to helloworld.com instead of the correct service. Both services work fine when I browse to localhost:5001 and localhost:5000. Am I missing something?
Docker's embedded DNS for service discovery is for container-to-container networking. For connections from outside of docker (e.g. from your browser) you need to publish the port (e.g. 5000 and 5001 in your file) and connect to that published port.
To use the container-to-container networking, you would need the DNS lookup to happen inside of the web container and the connection to go from web to helloworld, instead of from your browser to the container.
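Concretely, with the published ports above: the browser can reach http://localhost:5001, while a container-to-container call has to be made from inside the web container, along these lines (helloworld listens on 80 internally):

curl http://helloworld/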
Edit: from your comment, you may find a reverse proxy helpful. Traefik and nginx-proxy are two examples out there. You can configure these to forward to containers by hostname or by a virtual path, and in your situation, I think path based routing would be easier. The resulting compose file would look something like:
version: "3"
services:
traefik:
image: traefik
command: --docker --docker.watch
volumes:
- /var/lib/docker.sock:/var/lib/docker.sock
ports:
- 8080:80
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
labels:
- traefik.frontend.rule=PathPrefixStrip:/helloworld
- traefik.port=80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
labels:
- traefik.frontend.rule=PathPrefixStrip:/
- traefik.port=80
The above is all untested off the top of my head configuration, but should get you in the right direction. With the PathPrefixStrip rule, you can make a link in web to "/helloworld" which will go to the other container. And since the link doesn't have a hostname or port, it will go to the same traefik hostname/port you are already using.

How to use IP addresses instead of container names in docker compose networking

I'm using docker compose for a web application that I'm creating with asp.net core, postgres and redis. I have everything set up in compose to connect to postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with redis, I get an exception. After doing research, it turns out this exception is a known issue, and the workaround is using the IP address of the machine instead of a hostname. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
Ok, I found the answer. It was something I had been trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; that will output the IP address you can use to reach it from within the other running containers.
The reason I was confused was that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
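To pull out just the address, docker inspect also accepts a Go-template format string (same {container_id} placeholder as above):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}

Keep in mind the address is still dynamic, so resolving the service name is generally more reliable than pinning an IP.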
