ECONNREFUSED 127.0.0.1:5672: RabbitMQ with Docker Compose

I am not able to connect a Node.js app to the RabbitMQ server. Postgres connects correctly. I don't know why I get a connection refused.
version: "3"
networks:
  app-tier:
    driver: bridge
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=dockerDBuser
      - POSTGRES_PASSWORD=dockerDBpass
      - POSTGRES_DB=performance
    ports:
      - "5433:5432"
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    networks:
      - app-tier
  rabbitmq:
    image: rabbitmq:3.6.14-management
    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:5672"]
      interval: 30s
      timeout: 10s
      retries: 5
    ports:
      - "0.0.0.0:5672:5672"
      - "0.0.0.0:15672:15672"
    networks:
      - app-tier
  app:
    build: .
    depends_on:
      - rabbitmq
      - db
    links:
      - rabbitmq
      - db
    command: npm run startOrc
    environment:
      DATABASE_URL: postgres://dockerDBuser:dockerDBpass@db:5432/asdf
    restart: on-failure
    networks:
      - app-tier
It seems the app is trying to connect to RabbitMQ on the host (127.0.0.1) instead of the rabbitmq container.

Try changing the env variable CLOUDAMQP_URL to amqp://rabbitmq:5672
You can reach a service by its name, i.e. rabbitmq.
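As a sketch against the compose file in the question (assuming the app reads CLOUDAMQP_URL, the variable named above), the app service would carry:

```yaml
app:
  build: .
  environment:
    CLOUDAMQP_URL: amqp://rabbitmq:5672  # service name resolves on the compose network
  networks:
    - app-tier
```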

This error also comes up if Docker isn't running or the RabbitMQ server hasn't been started. So if anyone reading this post gets the same error, please check whether your RabbitMQ server is running.
You can use the command below to run the RabbitMQ server (5672 is the broker port):
docker run -p 5672:5672 rabbitmq

Related

Creating a docker container for postgresql with laravel sail

I created a Docker container using the standard "image: postgres:13", but inside the container PostgreSQL doesn't start because there is no cluster. What could be the problem?
Thanks for any answers!
My docker-compose:
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - pgsql
  pgsql:
    image: 'postgres:13'
    ports:
      - '${FORWARD_DB_PORT:-5432}:5432'
    environment:
      PGPASSWORD: '${DB_PASSWORD:-secret}'
      POSTGRES_DB: '${DB_DATABASE}'
      POSTGRES_USER: '${DB_USERNAME}'
      POSTGRES_PASSWORD: '${DB_PASSWORD:-secret}'
    volumes:
      - 'sailpgsql:/var/lib/postgresql/data'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "${DB_DATABASE}", "-U", "${DB_USERNAME}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sailpgsql:
    driver: local
and I get an error when trying to contact the container:
SQLSTATE[08006] [7] could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
and inside the container, when I try to start or restart postgres, I get this message:
[warn] No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning).
You should not connect through localhost but use the container name as the host name. So change your .env to contain:
DB_CONNECTION=[what the name is in the config array]
DB_HOST=pgsql
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=[whatever you want]
DB_PASSWORD=[whatever you want]
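Filled in as a concrete sketch (DB_CONNECTION=pgsql is an assumption based on Laravel's stock PostgreSQL connection name; host and port come from the compose file above; the credentials are example values):

```ini
DB_CONNECTION=pgsql   ; assumption: Laravel's default PostgreSQL connection name
DB_HOST=pgsql         ; the compose service name, not localhost
DB_PORT=5432
DB_DATABASE=laravel
DB_USERNAME=sail      ; example value; must match POSTGRES_USER
DB_PASSWORD=secret    ; example value; must match POSTGRES_PASSWORD
```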

Getting Segmentation Fault on running hyperledger/explorer in docker container

I am getting a segmentation fault, and Docker exits with code 139, when running the hyperledger-explorer Docker image.
docker-compose file for creating explorer-db
version: "2.1"
volumes:
  data:
  walletstore:
  pgadmin_4:
    external: true
networks:
  mynetwork.com:
    external:
      name: bikeblockchain_network
services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:V1.0.0
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    restart: always
    ports:
      - 54320:5432
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - data:/var/lib/postgresql/data
    networks:
      mynetwork.com:
        aliases:
          - postgresdb
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: user@domain.com
      PGADMIN_DEFAULT_PASSWORD: SuperSecret
      PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION: "True"
      # PGADMIN_CONFIG_LOGIN_BANNER: "Authorized Users Only!"
      PGADMIN_CONFIG_CONSOLE_LOG_LEVEL: 10
    volumes:
      - "pgadmin_4:/var/lib/pgadmin"
    ports:
      - 8080:80
    networks:
      - mynetwork.com
docker-compose-explorer file
version: "2.1"
volumes:
  data:
  walletstore:
    external: true
  pgadmin_4:
    external: true
networks:
  mynetwork.com:
    external:
      name: bikeblockchain_network
services:
  explorer.mynetwork.com:
    image: hyperledger/explorer:V1.0.0
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    # restart: always
    environment:
      - DATABASE_HOST=xx.xxx.xxx.xxx
      # Host is the VM IP address with ports exposed for Postgres. No issues here
      - DATABASE_PORT=54320
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - LOG_LEVEL_DB=debug
      - LOG_LEVEL_CONSOLE=info
      # - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=false
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./examples/net1/crypto:/tmp/crypto
      - walletstore:/opt/wallet
      - ./crypto-config/:/etc/data
    command: sh -c "node /opt/explorer/main.js && tail -f /dev/null"
    ports:
      - 6060:6060
    networks:
      - mynetwork.com
error
Attaching to explorer.mynetwork.com
explorer.mynetwork.com | Segmentation fault
explorer.mynetwork.com exited with code 139
Postgres is working fine. Docker is updated to the latest version.
Fabric network being used is generated inside IBM Blockchain VS Code extension.
I too faced the same problem with the Docker images: I had success with a manual start.sh, but not with the Docker image. After some exploration, I came to learn this is related to the architecture the image was built for; there seems to be a segmentation-fault issue in the latest v1.0.0 container image.
This is fixed on the latest master branch, but not yet released on Docker Hub.
Please build the Explorer container image yourself using build_docker_image.sh locally for the time being.
(From the HLF forum.)
Okay!! So I did some testing and found that if Docker is set to run on Windows login, Explorer throws a segmentation fault, but if I manually start Docker after Windows login, it works fine. Strange!!

Connecting to RabbitMQ container with docker-compose

I want to run RabbitMQ in one container, and a worker process in another. The worker process needs to access RabbitMQ.
I'd like these to be managed through docker-compose.
This is my docker-compose.yml file so far:
version: "3"
services:
  rabbitmq:
    image: rabbitmq
    command: rabbitmq-server
    expose:
      - "5672"
      - "15672"
  worker:
    build: ./worker
    depends_on:
      - rabbitmq
    # Allow access to docker daemon
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
So I've exposed the RabbitMQ ports. The worker process accesses RabbitMQ using the following URL:
amqp://guest:guest@rabbitmq:5672/
Which is what they use in the official tutorial, but localhost has been swapped for rabbitmq, since the containers should be discoverable with a hostname identical to the container name:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Whenever I run this, I get a connection refused error:
Recreating ci_rabbitmq_1 ... done
Recreating ci_worker_1 ... done
Attaching to ci_rabbitmq_1, ci_worker_1
worker_1 | dial tcp 127.0.0.1:5672: connect: connection refused
ci_worker_1 exited with code 1
I find this interesting because it's using the IP 127.0.0.1 which (I think) is localhost, even though I specified rabbitmq as the hostname. I'm not an expert on docker networking, so maybe this is desired.
I'm happy to supply more information if needed!
Edit
There is an almost identical question here. I think I need to wait until rabbitmq is up and running before starting worker. I tried doing this with a healthcheck:
version: "2.1"
services:
  rabbitmq:
    image: rabbitmq
    command: rabbitmq-server
    expose:
      - "5672"
      - "15672"
    healthcheck:
      test: [ "CMD", "nc", "-z", "localhost", "5672" ]
      interval: 10s
      timeout: 10s
      retries: 5
  worker:
    build: .
    depends_on:
      rabbitmq:
        condition: service_healthy
(Note the different version). This doesn't work, however - it will always fail as not-healthy.
Aha! I fixed it. @Ijaz was totally correct: the RabbitMQ service takes a while to start, and my worker tries to connect before it's running.
I tried using a delay, but this failed whenever RabbitMQ took longer than usual.
This is also indicative of a larger architectural problem: what happens if the queuing service (RabbitMQ in my case) goes offline during production? Right now, my entire site fails. There needs to be some built-in redundancy and polling.
As described in this related answer, we can use healthchecks in docker-compose 3+:
version: "3"
services:
  rabbitmq:
    image: rabbitmq
    command: rabbitmq-server
    expose:
      - 5672
      - 15672
    healthcheck:
      test: [ "CMD", "nc", "-z", "localhost", "5672" ]
      interval: 5s
      timeout: 15s
      retries: 1
  worker:
    image: worker
    restart: on-failure
    depends_on:
      - rabbitmq
Now, the worker container will restart a few times while the rabbitmq container stays unhealthy. rabbitmq immediately becomes healthy when nc -z localhost 5672 succeeds - i.e. when the queuing is live!
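The polling can also live in the worker itself, so it survives broker restarts in production too. A minimal Node.js sketch: the retry() helper is generic, and the amqplib call and the amqp://rabbitmq:5672 URL in the comment are assumptions for illustration, not code from the question.

```javascript
// retry.js - generic retry-with-delay helper (a sketch, not the poster's code)
async function retry(fn, attempts = 10, delayMs = 3000) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(); // return on the first successful attempt
    } catch (err) {
      lastErr = err; // e.g. ECONNREFUSED while the broker is still booting
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // all attempts exhausted
}

// Hypothetical usage with amqplib, connecting via the compose *service name*:
// const amqp = require('amqplib');
// const conn = await retry(() => amqp.connect('amqp://rabbitmq:5672'));

module.exports = { retry };
```

Unlike restart: on-failure, this keeps the worker process alive between attempts, so any in-memory state survives a slow broker start.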
Here is a correct working example:
version: "3.8"
services:
  rabbitmq:
    image: rabbitmq:3.7.28-management
    # container_name: rabbitmq
    volumes:
      - ./etc/rabbitmq/conf:/etc/rabbitmq/
      - ./etc/rabbitmq/data/:/var/lib/rabbitmq/
      - ./etc/rabbitmq/logs/:/var/log/rabbitmq/
    environment:
      RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE:-secret_cookie}
      RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-admin}
    ports:
      - 5672:5672    # amqp
      - 15672:15672  # http
      - 15692:15692  # prometheus
    healthcheck:
      test: [ "CMD", "rabbitmqctl", "status" ]
      interval: 5s
      timeout: 20s
      retries: 5
  mysql:
    image: mysql
    restart: always
    volumes:
      - ./etc/mysql/data:/var/lib/mysql
      - ./etc/mysql/scripts:/docker-entrypoint-initdb.d
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mysqldb
      MYSQL_USER: ${MYSQL_DEFAULT_USER:-testuser}
      MYSQL_PASSWORD: ${MYSQL_DEFAULT_PASSWORD:-testuser}
    ports:
      - "3306:3306"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 20s
      retries: 10
  trigger-batch-process-job:
    build: .
    environment:
      - RMQ_USER=${RABBITMQ_DEFAULT_USER:-admin}
      - RMQ_PASS=${RABBITMQ_DEFAULT_PASS:-admin}
      - RMQ_HOST=${RABBITMQ_DEFAULT_HOST:-rabbitmq}
      - RMQ_PORT=${RABBITMQ_DEFAULT_PORT:-5672}
      - DB_USER=${MYSQL_DEFAULT_USER:-testuser}
      - DB_PASS=${MYSQL_DEFAULT_PASSWORD:-testuser}
      - DB_SERVER=mysql
      - DB_NAME=mysqldb
      - DB_PORT=3306
    depends_on:
      mysql:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
Maybe you don't need to expose/map the ports on the host if you are just accessing the service from another container.
From the documentation:
Expose: Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
expose:
  - "3000"
  - "8000"
So it should be like this:
version: "3"
services:
  rabbitmq:
    image: rabbitmq
    command: rabbitmq-server
    expose:
      - "5672"
      - "15672"
  worker:
    build: ./worker
    depends_on:
      - rabbitmq
    # Allow access to docker daemon
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Also make sure to connect to rabbitmq only when it is ready to serve on the port.
Cleanest way for docker-compose v3.8:
version: "3.8"
services:
  worker:
    build: ./worker
    depends_on:
      rabbitmq:
        condition: service_healthy
  rabbitmq:
    image: library/rabbitmq
    ports:
      - 5671:5671
      - 5672:5672
    healthcheck:
      test: [ "CMD", "nc", "-z", "localhost", "5672" ]
      interval: 5s
      timeout: 10s
      retries: 3

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused

I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower. The airflow.cfg file is used to configure Airflow.
I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker compose file is as follows
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your Airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, generally it is not recommended to build images inside docker-compose.
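Concretely, the two airflow.cfg lines from the question would become the following (a sketch of the fix described above; credentials are the ones set on rabbit1 in the question's compose file):

```ini
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/
```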
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.

Docker (Compose) client connects to Kafka too early

I am trying to run Kafka with Docker and Docker Compose. This is the docker-compose.yml:
version: "2"
services:
  zookeeper:
    image: "wurstmeister/zookeeper"
    ports:
      - "2181:2181"
  kafka:
    build:
      context: "./services/kafka"
      dockerfile: "Dockerfile"
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "0.0.0.0"
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
  users:
    build:
      context: "./services/users"
      dockerfile: "Dockerfile"
    ports:
      - "4001:4001"
    environment:
      NODE_ENV: "develop"
      ZOOKEEPER_HOST: "zookeeper"
      ZOOKEEPER_PORT: "2181"
    volumes:
      - "./services/users:/service"
The users service only tries to connect (using kafka-node in Node.js), listens on a topic, and publishes one message to it every time it is run.
The problem is that I keep getting Connection Refused errors. I am using Dockerize to wait for the kafka port to be available in the Dockerfile with the line CMD dockerize -wait tcp://kafka:9092 node /service/index.js.
It waits for the port to be available before starting the users container and this system works, but it is not at the right time. It seems that Kafka is opening the 9092 port before it has elected a leader.
When I run Kafka first and let it start completely and then run my app, it runs smoothly.
How do I wait for the correct moment before starting my service?
Try docker-compose version 2.1 or 3, as it includes a healthcheck directive.
See "Docker Compose wait for container X before starting Y" as an example.
You can:
depends_on:
  kafka:
    condition: service_healthy
And in kafka add:
healthcheck:
  test: ["CMD", ...]
  interval: 30s
  timeout: 10s
  retries: 5
with a curl command for instance which would test if kafka has elected a leader.
A full example; this is what I use in docker compose.
tldr; use a kafka healthcheck
["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
integration test app depends on kafka
app depends on kafka
kafka depends on zookeeper
Since the integration test and the app are starting at the same time, I think this helps with total execution time.
Also, both are starting after kafka's healthcheck is passing.
version: '2.1'
services:
  my-integration-tests:
    image: golang:1.16
    volumes:
      - myapp:/app
    command: go test -tags=integration -mod=vendor -cover -v --ginkgo.v --ginkgo.progress --ginkgo.failFast
    depends_on:
      kafka:
        condition: service_healthy
  my-app:
    image: local/my-app
    build:
      context: .
    depends_on:
      kafka:
        condition: service_healthy
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    expose:
      - "2181"
    tmpfs:
      - /opt/zookeeper-3.4.6/data
  kafka:
    image: wurstmeister/kafka:latest
    depends_on:
      - zookeeper
    expose:
      - 9092
    tmpfs:
      - /kafka
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9094,OUTSIDE://kafka:9092
      KAFKA_LISTENERS: INSIDE://:9094,OUTSIDE://:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      LOG4J_LOGGER_KAFKA_AUTHORIZER_LOGGER: DEBUG, authorizerAppender
    healthcheck:
      test: ["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
      interval: 5s
      timeout: 10s
      retries: 5
