I am trying to combine my services using Docker Compose, but my celery and celery beat services can't connect to my rabbitmq service.
Here is my rabbitmq service in the docker-compose.yml file
rabbitmq:
  container_name: "rabbitmq"
  image: rabbitmq:3-management
  ports:
    - 15672:15672
    - 5672:5672
  environment:
    - RABBITMQ_DEFAULT_USER=guest
    - RABBITMQ_DEFAULT_PASS=guest
  depends_on:
    - server
  volumes:
    - rabbitmq:/var/lib/rabbitmq
and here are my celery worker and celery beat services in docker-compose.yml
celery_worker:
  container_name: celery-worker
  build: .
  command: celery -A tasks worker -E --loglevel=INFO
  environment:
    host_server: postgresqldb
    db_server_port: 5432
    database_name: db
    db_username: user
    db_password: password
    ssl_mode: prefer
  networks:
    - postgresqlnet
  depends_on:
    - rabbitmq
celery_beat:
  container_name: celery-beat
  build: .
  command: celery -A tasks beat
  environment:
    - host_server=postgresqldb
    - db_server_port=5432
    - database_name=db
    - db_username=user
    - db_password=password
    - ssl_mode=prefer
  networks:
    - postgresqlnet
  depends_on:
    - rabbitmq
I also have a celeryconfig.py where the broker URL is stored. Its contents are below
broker_url = "amqp://guest:guest@localhost:5672//"
When I run docker compose up I get this output from celery and celery beat.
celery-worker | [2023-02-03 14:24:43,223: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**@127.0.0.1:5672//: [Errno 111] Connection refused.
celery-worker | Trying again in 32.00 seconds... (16/100)
celery-worker |
celery-beat | [2023-02-03 14:25:10,058: ERROR/MainProcess] beat: Connection error: [Errno 111] Connection refused. Trying again in 32.0 seconds...
I realize what I did wrong now. First I needed to connect all three services to the same Docker network, and then use "rabbitmq" as the hostname in my celeryconfig file.
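For reference, a minimal sketch of the fix (assuming the user-defined postgresqlnet network that the celery services already use):
rabbitmq:
  container_name: "rabbitmq"
  image: rabbitmq:3-management
  networks:
    - postgresqlnet   # same network as celery_worker and celery_beat
and in celeryconfig.py, the service name replaces localhost:
broker_url = "amqp://guest:guest@rabbitmq:5672//"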
I've tried the 'with docker' docs here, but it's not working from localhost:7000, localhost:8081, or any other port I use. What am I missing?
REDIS_PORT=6379
### Redis ################################################
redis:
  container_name: redis
  hostname: redis
  build: ./redis
  volumes:
    - ${DATA_PATH_HOST}/redis:/data
  ports:
    - "${REDIS_PORT}:6379"
  networks:
    - backend
### REDISCOMMANDER ################################################
redis-commander:
  container_name: rediscommander
  hostname: redis-commander
  image: rediscommander/redis-commander:latest
  restart: always
  environment:
    - REDIS_HOSTS=local:redis:6379
  ports:
    - "7000:80"
  networks:
    - frontend
    - backend
  depends_on:
    - redis
Docker ps gives me:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
042c9a2e918a rediscommander/redis-commander:latest "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8081/tcp, 0.0.0.0:7000->80/tcp rediscommander
86bc8c1ca5ff laradock_redis "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:6379->6379/tcp redis
Docker logs rediscommander gives me:
$ docker logs rediscommander
Creating custom redis-commander config '/redis-commander/config/local-production.json'.
Parsing 1 REDIS_HOSTS into custom redis-commander config '/redis-commander/config/local-production.json'.
node ./bin/redis-commander
Using scan instead of keys
No Save: false
listening on 0.0.0.0:8081
access with browser at http://127.0.0.1:8081
Redis Connection redis:6379 using Redis DB #0
Redis Commander is listening on port 8081 inside the container. That is why you should change the port binding to
ports:
  - "7000:8081"
in the redis-commander block and access it via localhost:7000.
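After that change, docker ps should show the new mapping for the rediscommander container (hypothetical, abridged output):
0.0.0.0:7000->8081/tcp   rediscommander
and the UI should then be reachable at http://localhost:7000.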
I am getting started with Docker and docker-compose. I have followed the tutorials, and I use a docker-compose.yml file to run one of my sites on my local machine.
I can see my site running by going to http://localhost
My problem now is trying to run more than one site. If one of my sites is running and I try to run another site using docker-compose up -d, I get the following error.
$ docker-compose up -d
Creating network "exampleCOM_default" with driver "bridge"
Creating exampleCOMphp-fpm ...
Creating exampleCOMmariadb ... error
Creating exampleCOMphp-fpm ... done
ERROR: for exampleCOMmariadb Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
Creating exampleCOMnginx ... error
ERROR: for exampleCOMnginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
Encountered errors while bringing up the project.
This is my docker-compose file. I am using a LEMP stack (PHP, NGINX, MariaDB).
version: '3'
services:
  db:
    container_name: ${SITE_NAME}_mariadb
    build:
      context: ./mariadb
    volumes:
      - ./mariadb/scripts:/docker-entrypoint-initdb.d
      - ./.data/db:/var/lib/mysql
      - ./logs/mariadb:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - '${MYSQL_PORT:-3306}:3306'
    command: 'mysqld --innodb-flush-method=fsync'
    networks:
      - default
    restart: always
  nginx:
    container_name: ${SITE_NAME}_nginx
    build:
      context: ./nginx
      args:
        - 'php-fpm'
        - '9000'
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./logs/nginx/:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - default
    restart: always
  php-fpm:
    container_name: ${SITE_NAME}_php-fpm
    build:
      context: ./php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./php7-fpm/config/php.ini:/usr/local/etc/php/php.ini
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: ${MYSQL_DATABASE}
      DB_USERNAME: ${MYSQL_USER}
      DB_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - default
    restart: always
networks:
  default:
    driver: bridge
The host ports you have mapped are what prevent you from starting another instance of the stack, even though docker-compose creates a private network per project: only one process can bind a given host port at a time.
You can solve this problem by using random host ports assigned by docker-compose.
The ports entry in docker-compose is
ports:
  - host_port:container_port
If you specify only the container port, the host port is randomly assigned. See here
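For example, a minimal sketch of the nginx service from the compose file above, with host ports left out so Docker assigns free ones at startup:
nginx:
  ports:
    - "80"
    - "443"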
You can provide the host_port values in ranges.
In the example below, I've started multiple nginx containers that are automatically published to host ports taken from the range [30000-30005].
Command:
docker run -p 30000-30005:80 --name nginx1 -d nginx
Output:
9083d5fc97e0 nginx ... Up 2 seconds 0.0.0.0:30001->80/tcp nginx1
f2f9de1efd8c nginx ... Up 24 seconds 0.0.0.0:30000->80/tcp nginx
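To see which host port a given container actually received, you can ask Docker directly (output will look something like the mappings above):
$ docker port nginx1 80
0.0.0.0:30001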
I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
For Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower services. The airflow.cfg file is used to configure Airflow.
There I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker compose file is as follows
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers, you should be able to connect to the service rabbit1. So all you need to do is to change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
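You can see that default network with docker network ls; compose names it after the project directory (the project name below is hypothetical):
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
0f1e2d3c4b5a   airflow_default   bridge    local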
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, generally it is not recommended to build images inside docker-compose.
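For reference, the corrected entries in airflow.cfg would then look like this (a sketch, reusing the user/password values from the compose file above):
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/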
I solved this issue by installing the RabbitMQ server directly on my system with the command sudo apt install rabbitmq-server.
I have a problem with accessing a container from another container. My docker-compose.yml file is:
version: '2'
services:
  consul:
    image: "consul"
    hostname: "consul"
    command: "agent -server -client=0.0.0.0 -bind=10.30.1.134 -retry-join=10.30.1.97 -retry-join=10.30.1.42 -bootstrap-expect=3 -ui"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8300:8300"
      - "8301:8301"
      - "8302:8302"
      - "8600:53/udp"
    volumes:
      - /data/consul:/consul/data
    network_mode: host
  vault:
    depends_on:
      - consul
    image: "vault"
    hostname: "vault"
    links:
      - consul
    environment:
      VAULT_ADDR: http://127.0.0.1:8200
    ports:
      - "8200:8200"
      - "8201:8201"
    volumes:
      - ./tools/wait-for-it.sh:/wait-for-it.sh
      - ./config/vault:/config
      - ./config/vault/policies:/policies
    entrypoint: /wait-for-it.sh -t 200 -h consul -p 8500 -s -- vault server -config=/config/with-consul.hcl
When I do docker-compose up I see errors
vault_1 | nc: bad address 'consul'
And also I am not able to ping the service:
root@ip-10-30-1-134:~# docker run vault ping consul
ping: bad address 'consul'
According to the documentation the containers must be accessible by the name defined, i.e. consul.
I have started a docker container with the following command
docker run --name mysql --restart always -p 3306:3306 -v /var/lib/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7.14
and would then like to connect a WordPress site using the following docker-compose.yml file
version: '2'
services:
  wordpress:
    image: wordpress
    external_links:
      - mysql:mysql
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
    volumes:
      - /var/www/somesite.com:/var/www/html
But I keep getting the following error
Starting somesitecom_wordpress_1
Attaching to somesitecom_wordpress_1
wordpress_1 |
wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 19
wordpress_1 |
wordpress_1 | MySQL Connection Error: (2002) Connection refused
It seems like the external_links setting isn't working.
Any idea what I am doing wrong?
Your link is working, but you're on separate networks inside of Docker. From the docker-compose.yml docs:
Note: If you’re using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them.
To solve this, you can create your own network:
docker network create dbnet
docker network connect dbnet mysql
Then configure your docker-compose.yml with:
version: '2'
networks:
  dbnet:
    external:
      name: dbnet
services:
  wordpress:
    image: wordpress
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
    volumes:
      - /var/www/somesite.com:/var/www/html
    networks:
      - dbnet
Note with recent versions of Docker, you shouldn't need to link the containers, the DNS service should do the name resolution for you.
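To verify the setup, you can inspect the network; both the mysql container and the wordpress service container should appear in the Containers section of the output:
$ docker network inspect dbnet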