I want to know how to reach localhost under another hostname.
I tried using extra_hosts, but it did not work.
Is my docker-compose.yml written incorrectly?
Thanks.
docker-compose.yml
version: "3.2"
services:
od-app:
build: ./app
ports:
- 3000:3000
- 80:3000
volumes:
- ./app/src:/var/www/html
links:
- od-api:api.localhost*
extra_hosts:
- "test.example.com:127.0.0.1"
od-api:
build: ./api
ports:
- 8080:80
volumes:
- ./api/src:/var/www/html
- /var/www/html/node_modules
extra_hosts in docker-compose.yml just adds the mapping 127.0.0.1 test.example.com to the container's /etc/hosts.
This means the mapping only takes effect inside the container; it cannot be used from the host. If you want to reach the container's service from the host, for example as test.example.com:80, you should add this mapping to the host's /etc/hosts instead.
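For example, this is roughly what the host-side fix looks like (a sketch; test.example.com and the 80:3000 port mapping are taken from the compose file above):
# on the host, not inside the container
echo "127.0.0.1 test.example.com" | sudo tee -a /etc/hosts
curl http://test.example.com/   # reaches od-app through the published 80:3000 mapping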
Related
I want to access service1 from inside the service2 container using localhost:5432. How can I do that?
This is what my docker compose currently looks like:
services:
  service1:
    image: postgres:12
    ports:
      - '172.10.1.1:5432:5432'
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    ports:
      - '172.10.1.1:1234:1234'
Please note: I know I can access it using service1:5432 or just service1, but I would like to use localhost if possible.
It is not possible, because each container has its own IP.
But there is a workaround:
Set the network mode to host. The ports are then opened on the host machine and are accessible via 127.0.0.1. This does not work on Windows.
But I don't know any good reason why you would want to use localhost for Postgres. Are you trying to authenticate via localhost? Don't do that; use a password instead.
Using the host network may be the solution you are looking for:
https://docs.docker.com/network/host/
services:
  service1:
    image: postgres:12
    network_mode: host
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    network_mode: host
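With both services on the host network, Postgres listens directly on the host's 127.0.0.1:5432, so service2 can reach it as localhost. A quick check from inside service2 could look like this (a sketch; it assumes psql is installed in service2's image, with the user, password and database taken from the environment above):
# run inside the service2 container
psql 'postgresql://project:pass@localhost:5432/project' -c 'SELECT 1;'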
I have a setup where I build two containers with docker-compose.
One container is a web application, which I can access on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports.
Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to reach the elasticsearch service, i.e. http://elasticsearch:9200.
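To verify, you can run the request from inside the serverapplication container, for example (a sketch; it assumes curl is available in the image):
docker-compose exec serverapplication curl http://elasticsearch:9200/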
Your serverapplication and elasticsearch are running in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
I am trying to access a docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
In my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, trying the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I use the container's internal IP, e.g. curl 'http://172.27.0.2:8123/', I do get a response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line, - "8124:8123", you're mapping port 8123 of the clickhouse container to port 8124 on the host's localhost, which is what allows you to access clickhouse from the host at port 8124.
If you want to reach the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With these changes you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
As in Billy Ferguson's answer, you can use localhost from the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
But from another container (django), you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode. Be aware that with this, the django container shares the host's entire network.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    # ports and links are omitted: they can't be combined with host networking
    network_mode: "host"
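Since django then shares the host's network namespace, it reaches clickhouse exactly the way the host does, via the published port (a sketch; it assumes curl is available in the django image):
# inside the django container, which now uses the host's network
curl http://localhost:8124/   # routed to clickhouse port 8123 by the port mapping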
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
What is the use of container_name in a docker-compose.yml file? Can I use it as a hostname, which is otherwise just the service name from the docker-compose.yml file?
Also, when I explicitly set hostname under a service, does it override the hostname given by the service name?
hostname: just sets what the container believes its own hostname is. In the unusual event you got a shell inside the container, it might show up in the prompt. It has no effect on anything outside, and there’s usually no point in setting it. (It has basically the same effect as hostname(1): that command doesn’t cause anything outside your host to know the name you set.)
container_name: sets the actual name of the container when it runs, rather than letting Docker Compose generate it. If this name is different from the name of the block in services:, both names will be usable as DNS names for inter-container communication. Unless you need to use docker to manage a container that Compose started, you usually don’t need to set this either.
If you omit both of these settings, one container can reach another (provided they’re in the same Docker Compose file and have compatible networks: settings) using the name of the services: block and the port the service inside the container is listening on.
version: '3'
services:
  redis:
    image: redis
  db:
    image: mysql
    ports: ["6033:3306"]
  app:
    build: .
    ports: ["12345:8990"]
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
      MYSQL_HOST: db
      MYSQL_PORT: 3306
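Inside the app container, the names redis and db resolve through Docker's embedded DNS, so the environment values above are all the app needs. For illustration (a sketch; it assumes the client tools are installed in the app image and the databases are initialized):
# run inside the app container
redis-cli -h redis -p 6379 ping   # prints PONG
mysql -h db -P 3306 -u root -p    # connects to the db service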
The easiest answer is the following:
container_name: This is the container name that you see from the host machine when listing the running containers with the docker container ls command.
hostname: The hostname of the container. The name that you define here goes into the container's /etc/hosts file:
$ docker exec -it myserver /bin/bash
bash-4.2# cat /etc/hosts
127.0.0.1   localhost
172.18.0.2  myserver
That means you can ping machines by those names within a Docker network.
I highly suggest setting these two parameters to the same value to avoid confusion.
An example docker-compose.yml file:
version: '3'
services:
  database-server:
    image: ...
    container_name: database-server
    hostname: database-server
    ports:
      - "xxxx:yyyy"
  web-server:
    image: ...
    container_name: web-server
    hostname: web-server
    ports:
      - "xxxx:xxxx"
      - "5101:4001" # debug port
You can customize the image name to build and the container name during docker-compose up. For this, specify them as below in the docker-compose.yml file.
It will create an image and a container with custom names.
version: '3'
services:
  frontend_dev:
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    build:
      context: .
      dockerfile: Dockerfile.dev
    image: "mycustomname/sample:v1"
    container_name: mycustomname_sample_v1
    ports:
      - '3000:3000'
    volumes:
      - /app/node_modules
      - .:/app
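After docker-compose up, you can confirm both custom names from the host (a sketch; the output depends on your build):
$ docker images mycustomname/sample
$ docker ps --filter name=mycustomname_sample_v1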
I have two services, web and helloworld. The following is my docker-compose YAML file:
version: "3"
services:
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
ports:
- 5001:80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
ports:
- 5000:80
depends_on:
- helloworld
Inside web's index.html, I made a button that opens http://helloworld when clicked. However, my button ends up going to helloworld.com instead of the correct service. Both services work fine when I browse to localhost:5001 and localhost:5000. Am I missing something?
Docker's embedded DNS for service discovery is for container-to-container networking. For connections from outside of docker (e.g. from your browser) you need to publish the port (e.g. 5000 and 5001 in your file) and connect to that published port.
To use the container-to-container networking, you would need the DNS lookup to happen inside of the web container and the connection to go from web to helloworld, instead of from your browser to the container.
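For example, the same request that fails from your browser works when the lookup happens inside the web container (a sketch; it assumes curl exists in the web image):
# the name "helloworld" only resolves inside the compose network
docker-compose exec web curl http://helloworld:80/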
Edit: from your comment, you may find a reverse proxy helpful. Traefik and nginx-proxy are two examples out there. You can configure these to forward to containers by hostname or by a virtual path, and in your situation, I think path-based routing would be easier. The resulting compose file would look something like:
version: "3"
services:
traefik:
image: traefik
command: --docker --docker.watch
volumes:
- /var/lib/docker.sock:/var/lib/docker.sock
ports:
- 8080:80
helloworld:
build: ./hello
volumes:
- ./hello:/usr/src/app
labels:
- traefik.frontend.rule=PathPrefixStrip:/helloworld
- traefik.port=80
web:
build: ./web
volumes:
- ./web:/usr/share/nginx/html
labels:
- traefik.frontend.rule=PathPrefixStrip:/
- traefik.port=80
The above is untested, off-the-top-of-my-head configuration, but it should point you in the right direction. With the PathPrefixStrip rule, you can make a link in web to "/helloworld" which will go to the other container. And since the link doesn't have a hostname or port, it will go to the same traefik hostname/port you are already using.
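Under that setup, both routes go through the single published traefik port, something like (untested, matching the labels above):
curl http://localhost:8080/             # routed to web
curl http://localhost:8080/helloworld   # routed to helloworld, prefix stripped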