How to access docker container using localhost address

I am trying to access a docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside django_container the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I use that container's internal IP, as in curl 'http://172.27.0.2:8123/', I do get a response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"

With the line - "8124:8123" you are mapping port 8123 of the clickhouse container to port 8124 on the host, which is why you can reach clickhouse from the host at localhost:8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
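For instance, a quick connectivity check run from inside the django container could look like this (a minimal sketch, assuming the requests package is installed in the django image; ClickHouse answers plain "Ok." on its HTTP port):

import requests  # assumed to be installed in the django image

# Inside the compose network, the hostname "clickhouse" resolves
# to the clickhouse container's current IP via Docker's DNS.
resp = requests.get("http://clickhouse:8123/", timeout=5)
print(resp.status_code, resp.text)  # expect: 200 Ok.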

As in @Billy Ferguson's answer, you can visit the service via localhost on the host machine only because you define a port mapping that routes localhost:8124 to clickhouse:8123.
From another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, but with this the django container will share the host's entire network stack.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    # ports and links are dropped here: published ports are ignored
    # under host networking, and links conflict with network_mode: "host".
    network_mode: "host"
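With this change the django container sees the host's interfaces directly, so from inside it curl http://localhost:8124/ (the port published by clickhouse_container) should now succeed; the trade-off is that the django container loses all network isolation.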

It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001

Related

Connecting docker container application to remote database server

How can I make my Spring Boot application, running inside docker containers, connect to a postgres database that is running on a remote server (non-docker environment)? Here is my docker-compose.yml file:
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
app2:
image: repo/app2:latest
ports:
- 8001:8001
restart: always
network_mode: "host"
extra_hosts:
- 'postgresdb:192.168.2.50'
The IP of the remote PostgreSQL database machine is 192.168.2.50 (hostname: postgresdb).
I am using the network_mode: "host" option and it works without any problem, but I believe this defeats the purpose of using a docker network. What other options are available to make this work without using network_mode? The IP address and necessary ports on both the docker machine and the remote database server are whitelisted and have access through the firewalls.
Such an implementation obviously will not work.
Since your database is deployed remotely, the working solution is to pass its address in through an environment variable.
version: "3.3"
services:
app1:
image: repo/app1:latest
ports:
- 8000:8000
restart: always
network_mode: "host"
environment:
- DBHOST: "192.168.2.50"
All you need in your application is to read this variable.
Python example:
import os

dbhost = os.getenv("DBHOST")
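Expanding that into a connection sketch (assuming the psycopg2 driver; the database name, user, and DBPASSWORD variable are illustrative, not from the original answer):

import os
import psycopg2  # assumes the psycopg2 driver is installed

# Host injected by docker-compose; fall back to localhost for local runs.
dbhost = os.getenv("DBHOST", "localhost")

conn = psycopg2.connect(
    host=dbhost,
    port=5432,
    dbname="app",   # hypothetical database name
    user="app",     # hypothetical credentials
    password=os.getenv("DBPASSWORD", ""),
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()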

Access docker ports from a container inside another container at localhost

I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that the 9200 port is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the host name elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch run in different containers, so the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached by their service names. So from your serverapplication you must use the name 'elasticsearch' to connect to it.
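For illustration, a connectivity check as it might run inside the serverapplication container (a Python sketch only; the real application is Java-based, so this just demonstrates the hostname usage):

import urllib.request

# "elasticsearch" is the compose service name, resolved by Docker's DNS
# to the elasticsearch container's IP on the shared network.
with urllib.request.urlopen("http://elasticsearch:9200/", timeout=5) as resp:
    print(resp.read().decode())  # ElasticSearch's default JSON banner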

CouchDB not running on Docker image

I am trying to learn server-side Swift and am having success deploying via Heroku as a Docker container, but am struggling to get my database working when using CouchDB with it. The database runs fine locally, but I can't seem to get it to run in the Docker container.
My current Dockerfile is as follows:
FROM ibmcom/swift-ubuntu:5.0.2
WORKDIR /ServerSideSwift
COPY . .
RUN swift build -c release
CMD .build/release/ServerSideSwift
So to add couchdb to this I tried to create a docker-compose.yml that looks like this:
version: "3.7"
services:
web:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
links:
- db
db:
image: couchdb
ports:
- "5984:5984"
Building the image works fine and running works well too, but when it tries to create a new database (in Swift) I get the errors I put in the Swift code, which show that CouchDB isn't running and therefore can't create any new databases.
Can anyone see where I am going wrong?
Update 3: my current docker-compose.yml:
version: "3.7"
networks:
app-net:
driver: bridge
services:
app:
build: .
ports:
- "8080:8080"
networks:
- app-net
db:
image: couchdb
ports:
- "5984:5984"
environment:
COUCHDB_USER: Test
COUCHDB_PASSWORD: test
networks:
- app-net
First, change your connection string from "localhost" to "db" (the service name) so it is resolved by Docker DNS. Then change the connection parameters to not use encryption.
CouchDB listens on localhost by default, and since you are using Docker that means localhost inside the container.
You can exec into the CouchDB container and run curl localhost:5984, and it should work.
If you want to allow certain IPs to connect to your CouchDB server, you should set the bind_address config option (see the config docs).
To allow all IPs, use bind_address = 0.0.0.0 in local.ini.
bind_address
Defines the IP address by which CouchDB will be accessible.
[httpd]
bind_address = 127.0.0.1
To let CouchDB listen on any available IP address, just set the value to 0.0.0.0:
[httpd]
bind_address = 0.0.0.0
Add this config to a custom local.ini file and mount it inside the couchdb container at /opt/couchdb/etc/local.ini.
version: "3.7"
networks:
app-net:
driver: bridge
services:
app:
build: .
ports:
- "8080:8080"
networks:
- app-net
db:
image: couchdb
ports:
- "5984:5984"
environment:
COUCHDB_USER: Test
COUCHDB_PASSWORD: test
volumes:
- path_to_local.ini:/opt/couchdb/etc/
networks:
- app-net
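With this in place, the Swift app should reach CouchDB through the service name rather than localhost, i.e. at http://db:5984, authenticating with the Test/test credentials defined in the environment section above.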

How to connect localhost with another host name

I want to know how to map localhost to another host name.
I tried using extra_hosts but it did not go well.
Is my docker-compose.yml written incorrectly?
Thanks.
docker-compose.yml
version: "3.2"
services:
od-app:
build: ./app
ports:
- 3000:3000
- 80:3000
volumes:
- ./app/src:/var/www/html
links:
- od-api:api.localhost*
extra_hosts:
- "test.example.com:127.0.0.1"
od-api:
build: ./api
ports:
- 8080:80
volumes:
- ./api/src:/var/www/html
- /var/www/html/node_modules
extra_hosts in docker-compose.yaml just adds the DNS mapping 127.0.0.1 test.example.com to the container's /etc/hosts.
This means the mapping only takes effect inside the container; it is not visible on the host. If you want to reach the container's service from the host as test.example.com:80, you should add the mapping to the host's /etc/hosts instead.
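For example, since the compose file above publishes od-app on the host's port 80, the host-side entry to add to /etc/hosts would look like this (an illustrative line, assuming the service is reachable on the host's loopback interface):

127.0.0.1 test.example.com

After that, test.example.com:80 on the host resolves to loopback, where Docker has published the container's port.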

Mapping ports in docker-compose file doesn't work. Network unreachable

I'm trying to map a port from my container to a port on the host following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors. But if I try to reach the frontend service using localhost:8081, the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox, which uses Docker Machine. On Windows with Docker Toolbox you are running inside a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the Docker Machine documentation (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
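With that IP, the frontend service defined above should answer at http://192.168.99.100:8081 instead of localhost:8081.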
