I want to run RabbitMQ in one container, and a worker process in another. The worker process needs to access RabbitMQ.
I'd like these to be managed through docker-compose.
This is my docker-compose.yml file so far:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
So I've exposed the RabbitMQ ports. The worker process accesses RabbitMQ using the following URL:
amqp://guest:guest@rabbitmq:5672/
Which is what they use in the official tutorial, but localhost has been swapped for rabbitmq, since the containers should be discoverable with a hostname identical to the container name:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Whenever I run this, I get a connection refused error:
Recreating ci_rabbitmq_1 ... done
Recreating ci_worker_1 ... done
Attaching to ci_rabbitmq_1, ci_worker_1
worker_1 | dial tcp 127.0.0.1:5672: connect: connection refused
ci_worker_1 exited with code 1
I find this interesting because it's using the IP 127.0.0.1 which (I think) is localhost, even though I specified rabbitmq as the hostname. I'm not an expert on docker networking, so maybe this is desired.
I'm happy to supply more information if needed!
Edit
There is an almost identical question here. I think I need to wait until rabbitmq is up and running before starting the worker. I tried doing this with a healthcheck:
version: "2.1"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 10s
timeout: 10s
retries: 5
worker:
build: .
depends_on:
rabbitmq:
condition: service_healthy
(Note the different version.) This doesn't work, however - the healthcheck always reports the service as unhealthy.
Aha! I fixed it. @Ijaz was totally correct - the RabbitMQ service takes a while to start, and my worker tries to connect before it's running.
I tried using a fixed delay, but this failed whenever RabbitMQ took longer than usual to start.
This is also indicative of a larger architectural problem - what happens if the queuing service (RabbitMQ in my case) goes offline during production? Right now, my entire site fails. There needs to be some built-in redundancy and polling.
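In the worker itself, that polling could look something like the minimal sketch below (assuming a Go worker using the github.com/streadway/amqp client - neither detail is confirmed above; adjust for whatever client you actually use):

package main

import (
	"log"
	"time"

	"github.com/streadway/amqp" // assumed AMQP client library
)

// dialWithRetry polls RabbitMQ until it answers or the attempts run out,
// so a slow-starting or briefly offline broker no longer kills the worker.
func dialWithRetry(url string, attempts int, delay time.Duration) (*amqp.Connection, error) {
	var conn *amqp.Connection
	var err error
	for i := 0; i < attempts; i++ {
		conn, err = amqp.Dial(url)
		if err == nil {
			return conn, nil
		}
		log.Printf("rabbitmq not ready (%v), retrying in %s", err, delay)
		time.Sleep(delay)
	}
	return nil, err
}

func main() {
	conn, err := dialWithRetry("amqp://guest:guest@rabbitmq:5672/", 10, 5*time.Second)
	if err != nil {
		log.Fatalf("could not connect to rabbitmq after retries: %v", err)
	}
	defer conn.Close()
	log.Println("connected to rabbitmq")
}

Retrying inside the process also covers the case where the broker disappears briefly at runtime, not just at startup.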
As described in this related answer, we can use healthchecks in docker-compose 3+:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- 5672
- 15672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 15s
retries: 1
worker:
image: worker
restart: on-failure
depends_on:
- rabbitmq
Now, the worker container will restart a few times while the rabbitmq container is still unhealthy. rabbitmq becomes healthy as soon as nc -z localhost 5672 succeeds - i.e. when the queueing service is live!
Here is a correct, working example:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3.7.28-management
#container_name: rabbitmq
volumes:
- ./etc/rabbitmq/conf:/etc/rabbitmq/
- ./etc/rabbitmq/data/:/var/lib/rabbitmq/
- ./etc/rabbitmq/logs/:/var/log/rabbitmq/
environment:
RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE:-secret_cookie}
RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-admin}
ports:
- 5672:5672 #amqp
- 15672:15672 #http
- 15692:15692 #prometheus
healthcheck:
test: [ "CMD", "rabbitmqctl", "status"]
interval: 5s
timeout: 20s
retries: 5
mysql:
image: mysql
restart: always
volumes:
- ./etc/mysql/data:/var/lib/mysql
- ./etc/mysql/scripts:/docker-entrypoint-initdb.d
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: mysqldb
MYSQL_USER: ${MYSQL_DEFAULT_USER:-testuser}
MYSQL_PASSWORD: ${MYSQL_DEFAULT_PASSWORD:-testuser}
ports:
- "3306:3306"
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
timeout: 20s
retries: 10
trigger-batch-process-job:
build: .
environment:
- RMQ_USER=${RABBITMQ_DEFAULT_USER:-admin}
- RMQ_PASS=${RABBITMQ_DEFAULT_PASS:-admin}
- RMQ_HOST=${RABBITMQ_DEFAULT_HOST:-rabbitmq}
- RMQ_PORT=${RABBITMQ_DEFAULT_PORT:-5672}
- DB_USER=${MYSQL_DEFAULT_USER:-testuser}
- DB_PASS=${MYSQL_DEFAULT_PASSWORD:-testuser}
- DB_SERVER=mysql
- DB_NAME=mysqldb
- DB_PORT=3306
depends_on:
mysql:
condition: service_healthy
rabbitmq:
condition: service_healthy
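For completeness, the trigger-batch-process-job container could assemble its connection settings from those RMQ_*/DB_* variables roughly like this. This is only a sketch; the Go language and the exact URL/DSN formats are assumptions, not something stated in the compose file above:

package main

import (
	"fmt"
	"os"
)

// getenv returns an environment variable or a fallback when it is unset,
// mirroring the ${VAR:-default} pattern used in the compose file.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

func main() {
	amqpURL := fmt.Sprintf("amqp://%s:%s@%s:%s/",
		getenv("RMQ_USER", "admin"),
		getenv("RMQ_PASS", "admin"),
		getenv("RMQ_HOST", "rabbitmq"),
		getenv("RMQ_PORT", "5672"))

	// go-sql-driver/mysql style DSN; adjust if your job uses another driver.
	mysqlDSN := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s",
		getenv("DB_USER", "testuser"),
		getenv("DB_PASS", "testuser"),
		getenv("DB_SERVER", "mysql"),
		getenv("DB_PORT", "3306"),
		getenv("DB_NAME", "mysqldb"))

	fmt.Println("AMQP URL: ", amqpURL)
	fmt.Println("MySQL DSN:", mysqlDSN)
}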
You may not need to expose/map the ports on the host if you are only accessing the service from another container.
From the documentation:
expose
Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.

expose:
  - "3000"
  - "8000"
So it should be like this:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Also make sure to connect to RabbitMQ only when it is ready to serve on its port.
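If you want to do that check inside the worker itself, a plain TCP probe is enough to detect when something is listening. A rough sketch, assuming the worker is written in Go (an assumption, not something given above):

package main

import (
	"log"
	"net"
	"time"
)

// waitForPort dials the address repeatedly until a TCP connection succeeds,
// i.e. until something is actually listening on rabbitmq:5672.
func waitForPort(addr string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		c, err = net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			c.Close()
			return nil
		}
		log.Printf("%s not reachable yet (%v), retrying in %s", addr, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := waitForPort("rabbitmq:5672", 30, 2*time.Second); err != nil {
		log.Fatalf("gave up waiting for rabbitmq: %v", err)
	}
	log.Println("rabbitmq port is open; safe to connect")
}

Note that an open port only shows the listener is up; a check like rabbitmqctl status (as in the healthcheck above) or an actual AMQP handshake is a stricter readiness test.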
Cleanest way for Docker Compose v3.8:
version: "3.8"
services:
worker:
build: ./worker
rabbitmq:
condition: service_healthy
rabbitmq:
image: library/rabbitmq
ports:
- 5671:5671
- 5672:5672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 10s
retries: 3
Related
I just learned how to use docker-compose, and I'm having some problems dockerizing my php-magento project.
My project looks like this:
app (magento)
nginx
mysql
redis
I'm getting an error when I try to execute these lines, or when I add the Redis connection to the Magento env.
Dockerfile - app
Error - without redis
Error - with redis
But if I comment these lines out, it works fine and I can execute them after the container is up.
I imagine it's something with the container network, but that's just a guess; I already added depends_on and made sure the app starts after db and redis.
Can someone help?
Docker-compose:
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile-app
      args:
        ...
    volumes:
      ...
    ports:
      - 1000:80
    healthcheck:
      test: ["CMD", "wait-for-it", "-h", "localhost", "-p", "80", "-t", "1", "-q"]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 60s
    environment:
      ...
    #depends_on:
    #  - nginx
    #entrypoint: ["sleep", "1200"]
  nginx:
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - "80:80"
    restart: on-failure
    volumes:
      ...
    environment:
      VIRTUAL_HOST: localhost
    #entrypoint: ["sleep", "1200"]
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      ...
    restart: always
  database:
    image: mysql:5.7
    ports:
      - 3306:3306
    environment:
      ...
    volumes:
      ...
volumes:
  ...
I am trying to design a docker-compose.yml file that will allow me to easily launch environments to develop inside of. Sometimes I would like to have 2 or more of these up at the same time but doing it naively I get ERROR: for pdb Cannot create container for service pdb: Conflict. The container name "/pdb" is already in use by container ... (even if they are on different stacks).
version: '3.4'
services:
  pdb:
    hostname: "pdb"
    container_name: "pdb"
    image: "postgres:latest"
  (...other services...)
Is there a way to automatically name these in a distinguishable but systematic way? For example something like this:
version: '3.4'
services:
  pdb:
    hostname: "${stack_name}_pdb"
    container_name: "${stack_name}_pdb"
    image: "postgres:latest"
  (...other services...)
EDIT: Apparently this is a somewhat service-specific question, so here is the complete compose file just in case...
version: '3.4'
services:
  rmq:
    hostname: "rmq"
    container_name: "rmq"
    image: "rabbitmq:latest"
    networks:
      - "fakenet"
    ports:
      - "5672:5672"
    healthcheck:
      test: "rabbitmq-diagnostics -q ping"
      interval: 30s
      timeout: 30s
      retries: 3
  pdb:
    hostname: "pdb"
    container_name: "pdb"
    image: "postgres:latest"
    networks:
      - "fakenet"
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: ******
      POSTGRES_USER: postgres
      POSTGRES_DB: test_db
    volumes:
      - "./deploy/pdb:/docker-entrypoint-initdb.d"
      - "./data/dbase:/var/lib/postgresql/data"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  workenv:
    hostname: "aiida"
    container_name: "aiida"
    image: "aiida_workenv:v0.1"
    expose:
      - "8888" # AiiDa Lab
      - "8890" # Jupyter Lab
      - "5000" # REST API
    ports: # local:container
      - 8888:8888
      - 8890:8890
      - 5000:5000
    volumes:
      - "./data/codes:/home/devuser/codes"
      - "./data/aiida:/home/devuser/.aiida"
    depends_on:
      pdb:
        condition: service_healthy
      rmq:
        condition: service_healthy
    networks:
      - "fakenet"
    environment:
      LC_ALL: "en_US.UTF-8"
      LANG: "en_US.UTF-8"
      PSQL_HOST: "pdb"
      PSQL_PORT: "5432"
    command: "tail -f /dev/null"
networks:
  fakenet:
    driver: bridge
Just don't manually set container_name: at all. Compose will automatically assign a name based on the current project name. Similarly, you don't usually need to set hostname: (RabbitMQ is one extremely specific exception, if you're using that).
If you do need to publish ports out of your Compose setup to be able to access them from the host system, the other obvious pitfall is that the first ports: number must be unique across the entire host. You can specify a single number for ports: to let Docker pick the host port, though you'll need to look it up later with docker-compose port.
version: '3.8'
services:
  pdb:
    image: "postgres:latest"
    # no hostname: or ports:
  app:
    build: .
    environment:
      PGHOST: pdb
    ports:
      - 3000 # container-side port, Docker picks host port
docker-compose -p myname up -d
docker-compose -p myname port app 3000
I have airflow running locally on port 8080 with the following docker-compose.yaml:
version: '3.7'
services:
  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    logging:
      options:
        max-size: 10m
        max-file: "3"
  webserver:
    image: puckel/docker-airflow:1.10.9
    restart: always
    depends_on:
      - postgres
    environment:
      - LOAD_EX=y
      - EXECUTOR=Local
    logging:
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - ./dags:/usr/local/airflow/dags
      # Add this to have third party packages
      - ./requirements.txt:/requirements.txt
      # - ./plugins:/usr/local/airflow/plugins
    ports:
      - "8080:8080"
    command: webserver
    healthcheck:
      test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
      interval: 30s
      timeout: 30s
      retries: 3
However, I need port 8080 for another process. I tried updating to both "8080:8081" and "8081:8081", but neither worked - the server would not respond. "8080:8080", however, works like a charm. What am I missing here?
I think you missed the only correct option. The syntax for ports is:
{host : container}
so in your case
8081:8080
should technically work, assuming of course that Airflow runs on port 8080 and has that port exposed (which it seems to, according to the Dockerfile).
You could change the HTTP port on the webserver command and update the Docker port mapping like this:
In the YAML:
    ...
    command: webserver -p 9999
    ports:
      - "9999:9999"
Sometimes this comes in handy if you need the container to start listening directly on a specific port and want to avoid port mappings, for example in scenarios where you are also changing the network configuration with network_mode: host in the YAML.
This basically means you can use only command: webserver -p 9999 and get rid of the ports: YAML completely.
I am not able to connect a Node.js app to the RabbitMQ server. Postgres connects correctly. I don't know why I get a connection refused error.
version: "3"
networks:
app-tier:
driver: bridge
services:
db:
image: postgres
environment:
- POSTGRES_USER=dockerDBuser
- POSTGRES_PASSWORD=dockerDBpass
- POSTGRES_DB=performance
ports:
- "5433:5432"
volumes:
- ./pgdata:/var/lib/postgresql/data
networks:
- app-tier
rabbitmq:
image: rabbitmq:3.6.14-management
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:5672"]
interval: 30s
timeout: 10s
retries: 5
ports:
- "0.0.0.0:5672:5672"
- "0.0.0.0:15672:15672"
networks:
- app-tier
app:
build: .
depends_on:
- rabbitmq
- db
links:
- rabbitmq
- db
command: npm run startOrc
environment:
      DATABASE_URL: postgres://dockerDBuser:dockerDBpass@db:5432/asdf
    restart: on-failure
    networks:
      - app-tier
It seems it's trying to connect to RabbitMQ on the host instead of the rabbitmq container.
Try changing the env variable CLOUDAMQP_URL to amqp://rabbitmq:5672.
You can reach a service by its name, i.e. rabbitmq.
This error also comes up if you haven't started Docker and run the RabbitMQ server. So in case someone reading this post gets the same error, please check whether your RabbitMQ server is running.
You can use the command below to run the RabbitMQ server (5672 is the port it listens on):
docker run -p 5672:5672 rabbitmq
I am trying to run Kafka with Docker and Docker Compose. This is the docker-compose.yml:
version: "2"
services:
zookeeper:
image: "wurstmeister/zookeeper"
ports:
- "2181:2181"
kafka:
build:
context: "./services/kafka"
dockerfile: "Dockerfile"
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: "0.0.0.0"
KAFKA_CREATE_TOPICS: "test:1:1"
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
users:
build:
context: "./services/users"
dockerfile: "Dockerfile"
ports:
- "4001:4001"
environment:
NODE_ENV: "develop"
ZOOKEEPER_HOST: "zookeeper"
ZOOKEEPER_PORT: "2181"
volumes:
- "./services/users:/service"
The users service only tries to connect (using kafka-node in Node.js), listens on a topic, and publishes one message to it every time it is run.
The problem is that I keep getting Connection Refused errors. I am using Dockerize to wait for the kafka port to be available in the Dockerfile with the line CMD dockerize -wait tcp://kafka:9092 node /service/index.js.
It waits for the port to be available before starting the users container and this system works, but it is not at the right time. It seems that Kafka is opening the 9092 port before it has elected a leader.
When I run Kafka first and let it start completely and then run my app, it runs smoothly.
How do I wait for the correct moment before starting my service?
Try docker-compose file version 2.1 or 3, as it includes a healthcheck directive.
See "Docker Compose wait for container X before starting Y" as an example.
You can:
depends_on:
  kafka:
    condition: service_healthy
And in kafka add:
healthcheck:
  test: ["CMD", ...]
  interval: 30s
  timeout: 10s
  retries: 5
with, for instance, a command which would test whether Kafka has elected a leader.
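If you would rather do that leader check from the consuming service itself instead of a container healthcheck, something along these lines could work. It is only a sketch, assuming a Go client using the github.com/Shopify/sarama library and the test topic created by KAFKA_CREATE_TOPICS (both assumptions):

package main

import (
	"log"
	"time"

	"github.com/Shopify/sarama" // assumed Kafka client library
)

// hasLeader reports whether partition 0 of the topic currently has an
// elected leader, which is the moment the broker can actually serve it.
func hasLeader(client sarama.Client, topic string) bool {
	broker, err := client.Leader(topic, 0)
	return err == nil && broker != nil
}

func main() {
	cfg := sarama.NewConfig()
	client, err := sarama.NewClient([]string{"kafka:9092"}, cfg)
	if err != nil {
		log.Fatalf("cannot reach kafka: %v", err)
	}
	defer client.Close()

	// Poll until the topic has a leader rather than trusting an open port.
	for !hasLeader(client, "test") {
		log.Println("kafka is up but the topic has no leader yet, waiting...")
		time.Sleep(2 * time.Second)
	}
	log.Println("kafka topic has a leader; safe to start consuming")
}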
A full example; this is what I use in docker compose.
tldr; use a kafka healthcheck
["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
integration test app depends on kafka
app depends on kafka
kafka depends on zookeeper
Since the integration test and the app are starting at the same time, I think this helps with total execution time.
Also, both are starting after kafka's healthcheck is passing.
version: '2.1'
services:
  my-integration-tests:
    image: golang:1.16
    volumes:
      - myapp:/app
    command: go test -tags=integration -mod=vendor -cover -v --ginkgo.v --ginkgo.progress --ginkgo.failFast
    depends_on:
      kafka:
        condition: service_healthy
  my-app:
    image: local/my-app
    build:
      context: .
    depends_on:
      kafka:
        condition: service_healthy
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    expose:
      - "2181"
    tmpfs:
      - /opt/zookeeper-3.4.6/data
  kafka:
    image: wurstmeister/kafka:latest
    depends_on:
      - zookeeper
    expose:
      - 9092
    tmpfs:
      - /kafka
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:9094,OUTSIDE://kafka:9092
      KAFKA_LISTENERS: INSIDE://:9094,OUTSIDE://:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      LOG4J_LOGGER_KAFKA_AUTHORIZER_LOGGER: DEBUG, authorizerAppender
    healthcheck:
      test: ["CMD", "kafka-topics.sh", "--list", "--zookeeper", "zookeeper:2181"]
      interval: 5s
      timeout: 10s
      retries: 5