I am running both of my programs inside Docker on my local host.
When I send a request from one container to another, I get a connection refused error.
One is running on port 8000 and the other on port 8001.
I run my images with docker run -p 8000:8000 service1 and vice versa.
I am trying to connect to the service running on 8000 from the one on 8001.
I am getting an error like:
connect ECONNREFUSED 0.0.0.0:8000
You need to use Docker Compose with the network mode set to host:
network_mode: "host"
Check this sample docker-compose file:
version: '2.1'
services:
  # Governing microservices
  api-gateway:
    build: zuul-apigateway/
    depends_on:
      eureka-server:
        condition: service_healthy
    restart: always
    network_mode: "host"
    image: demo-zuul-service
    hostname: localhost
    ports:
      - 9085:9085
    healthcheck:
      test: "exit 0"
  eureka-server:
    build: eureka-server/
    restart: always
    network_mode: "host"
    image: demo-eureka-service
    hostname: localhost
    ports:
      - 9083:9083
    healthcheck:
      test: "exit 0"
Now both of these containers can communicate with each other, as they are on the host network.
Reference:
https://github.com/thoopalliamar/Juggler/blob/master/docker-compose.yml
This is a 13-microservice application whose services can communicate with each other.
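Applied to the question's two services, a minimal sketch (assuming the image names service1 and service2 from the question, each binding its own port) would be:

version: '2.1'
services:
  service1:
    image: service1        # assumed image name; listens on host port 8000
    network_mode: "host"
  service2:
    image: service2        # assumed image name; listens on host port 8001
    network_mode: "host"   # can now reach service1 at localhost:8000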
I need to resolve a container name to its IP address from the Docker host.
The reason for this is that I need a container to run on the host network, but it must also be able to resolve the container "backend" that it connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
  database:
    image: mongo
    container_name: database
    hostname: database
    ports:
      - "27017:27017"
  backend:
    image: "project/backend:latest"
    container_name: backend
    hostname: backend
    environment:
      - NODE_ENV=production
      - DATABASE_HOST=database
      - UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
    ports:
      - "8080:8080"
    depends_on:
      - database
    tty: true
  frontend:
    image: "project/frontend:latest"
    container_name: frontend
    hostname: frontend
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - backend
    environment:
      - BACKEND_HOST=backend
  connector:
    image: "project/connector:latest"
    container_name: connector
    hostname: connector
    ports:
      - "1900:1900/udp"
    #expose:
    #  - "1900/udp"
    environment:
      - NODE_ENV=production
      - BACKEND_HOST=backend
      - STARTUP_DELAY=1500
    depends_on:
      - backend
    network_mode: host
    tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
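For reference, I can read a container's internal IP from the host with docker inspect, but that is a one-off lookup rather than name resolution:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend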
A test with a Docker ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network, but must also be able to resolve the container name "backend" to the Docker-internal IP address.
NOTE: Perhaps this is better suited for Super User or similar?
I am trying to access a Docker container from another container using the localhost address.
The compose file is pretty simple. Both containers' ports are exposed.
There are no problems when building.
On my host machine I can successfully execute curl http://localhost:8124/ and get a response.
But inside the django_container, trying the same command gives a Connection refused error.
I tried adding them to the same network, but the result didn't change.
However, if I execute it with the internal IP of that container, like curl 'http://172.27.0.2:8123/', I get the response.
Is this the default behavior? How can I reach clickhouse_container using localhost?
version: '3'
services:
  django:
    container_name: django_container
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    container_name: clickhouse_container
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
With this line here, - "8124:8123", you're mapping the clickhouse container's port to port 8124 on localhost, which allows you to access clickhouse from localhost at port 8124.
If you want to hit the clickhouse container from within the Docker network, you have to use the container's hostname. This is what I like to do:
version: '3'
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    ports:
      - "8007:8000"
    links:
      - clickhouse:clickhouse
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
  clickhouse:
    hostname: clickhouse
    container_name: clickhouse
    build: ./clickhouse
    ports:
      - "9001:9000"
      - "8124:8123"
      - "9010:9009"
If you make the changes as above, you should be able to access clickhouse from within the django container like this: curl http://clickhouse:8123.
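To verify, a quick check (a sketch assuming the stack is up under docker-compose, and using ClickHouse's built-in /ping HTTP endpoint) is:

docker-compose exec django curl http://clickhouse:8123/ping
# expected output: Ok.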
As in @Billy Ferguson's answer, you can visit it using localhost on the host machine just because you define a port mapping routing localhost:8124 to clickhouse:8123.
But from another container (django) you can't. If you insist, there is an ugly workaround: share the host's network namespace with network_mode, though with this the django container simply shares the entire network of the host.
services:
  django:
    hostname: django
    container_name: django
    build: ./django
    # ports: and links: are omitted here, since published ports are
    # ignored and links are rejected when using host networking
    volumes:
      - ./django:/usr/src/run
    command: bash /usr/src/run/run.sh
    network_mode: "host"
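With host networking in place, the django container shares the host's interfaces, so (assuming the 8124:8123 mapping on clickhouse from the original file is still there) the original command should now also work from inside the container:

curl http://localhost:8124/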
It depends on the config.xml settings. If config.xml contains <listen_host>0.0.0.0</listen_host>, you can use clickhouse-client -h your_ip --port 9001.
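One way to apply that setting, as a sketch (assuming the stock clickhouse-server image, which merges extra settings from /etc/clickhouse-server/config.d/), is to mount a small override file that contains just <listen_host>0.0.0.0</listen_host>:

  clickhouse:
    build: ./clickhouse
    volumes:
      - ./listen.xml:/etc/clickhouse-server/config.d/listen.xml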
I have a docker-compose file like this:
version: '3.5'
services:
  RedisServerA:
    container_name: RedisServerA
    image: redis:3.2.11
    command: "redis-server --port 26379"
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:26379
    expose:
      - 26379
  RedisServerB:
    container_name: RedisServerB
    image: redis:3.2.11
    command: "redis-server --port 6379"
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
    expose:
      - 6379
Now I do a vagrant ssh and run:
ping RedisServerA
ping RedisServerB
They both work.
Now I try to connect to one Redis server:
redis-cli -h RedisServerB
It works fine.
Then I try to connect to the other:
redis-cli -h RedisServerA -p 26739
It says:
Could not connect to Redis at RedisServerA:26739: Connection refused
Could not connect to Redis at RedisServerA:26739: Connection refused
Twice.
What am I missing here?
Typically in this setup you'd let each container run on its "natural" port. For connections from outside Docker you need the ports: mapping, and you'd access a container via its published port on the host's IP address. For connections between Docker containers (assuming they're on the same network, and if you used bare docker run, you manually created that network), you use the container name and the container's internal port number.
We can clean up the docker-compose.yml file by removing some unnecessary lines (container_name: and expose: don't really have a practical effect) and letting the image run its default command: on the default port, and only remapping with ports:. We'd get:
version: '3.5'
services:
  RedisServerA:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:6379
  RedisServerB:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
Between containers, you'd use the default port:
redis-cli -h RedisServerA
redis-cli -h RedisServerB
From outside Docker, you'd use the server's host name and the published ports:
redis-cli -h server.example.com -p 26379
redis-cli -h server.example.com
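To confirm the container-to-container path, a quick check (a sketch assuming the cleaned-up file above is running) would be:

docker-compose exec RedisServerB redis-cli -h RedisServerA ping
# expected reply: PONG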
I'm trying to map a port from my container to a port on the host following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors. But if I try to reach the frontend service using localhost:8081, the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses Docker Machine. On Windows with Docker Toolbox, you are running under a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the documentation on Docker Machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
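So, as a sketch (assuming the default machine name default and the compose file above), the frontend check becomes:

curl http://$(docker-machine ip default):8081/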
I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower. The airflow.cfg file is used to configure Airflow,
where I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/.
My docker-compose file is as follows:
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried both 127.0.0.1 and localhost.
What am I doing wrong?
From within your Airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
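In airflow.cfg that would look like this (the user/password values are the ones defined in the compose file above):

broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/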
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to publish the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose; a sketch of the alternative follows.
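Following that advice, you could build and tag the image once up front (the tag my-airflow is only an example name) and reference it from each service instead of build::

docker build -t my-airflow airflow/

and in the compose file:

  webserver:
    image: my-airflow   # instead of build: "airflow/"; same change for scheduler, worker, flower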
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.