I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building airflow webserver, airflow scheduler, airflow worker and airflow flower. The airflow.cfg file is used to configure Airflow.
There I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/.
My docker-compose file is as follows:
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my Airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your Airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker Compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 and 15672 ports on rabbit1 unless you want to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
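For example, with the credentials from the compose file above, the relevant lines in airflow.cfg would look roughly like this (a sketch; option names may differ slightly between Airflow versions):
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/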
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.
I need to resolve a container name to its IP address from the Docker host.
The reason for this is that I need a container to run on the host network, but it must also be able to resolve the container "backend", which it connects to (the container must send and receive multicast packets).
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
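(As an aside, a one-off lookup of the backend container's bridge-network IP from the host is possible with docker inspect, sketched below, but that is not name resolution:)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' backend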
A test with a Docker Ubuntu image and socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited on superuser or similar?
I am launching containers via docker-compose, but 2 out of 3 containers are failing, stating: "exec user process caused "exec format error"".
The above error is raised while executing the file at /opt/whatsapp/bin/wait_on_postgres.sh; I need to add #!/bin/bash at the top of this file.
The problem is that the container exits almost immediately, so how can I access this file to make the necessary changes?
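(One way to get at the script even though the container never stays up, sketched here with a hypothetical temporary container name and the example image tag from the compose file below: docker create builds a container without starting it, and docker cp works on containers that are not running.)
docker create --name wacore-tmp docker.whatsapp.biz/coreapp:v2.31.4
docker cp wacore-tmp:/opt/whatsapp/bin/wait_on_postgres.sh .
docker rm wacore-tmp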
Below is the docker-compose.yml I am using:
version: '3'
volumes:
  whatsappMedia:
    driver: local
  postgresData:
    driver: local
services:
  db:
    image: postgres:10.6
    command: "-p 3306 -N 500"
    restart: always
    environment:
      POSTGRES_PASSWORD: testpass
      POSTGRES_USER: root
    expose:
      - "33060"
    ports:
      - "33060:3306"
    volumes:
      - postgresData:/var/lib/postgresql/data
    network_mode: bridge
  wacore:
    image: docker.whatsapp.biz/coreapp:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db
  waweb:
    image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.31.4 docker-compose <command> <options>)}
    command: ["/opt/whatsapp/bin/wait_on_postgres.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
      # This is the version of the docker templates being used to run WhatsApp Business API
      WA_RUNNING_ENV_VERSION: v2.2.3
      ORCHESTRATION: DOCKER-COMPOSE
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
The problem got resolved by using a 64-bit guest OS image.
I was running these containers on 32-bit CentOS, which was causing the error.
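(A quick sanity check for this kind of mismatch, as a sketch using one of the images from the compose file above: compare the image's platform with the host's.)
uname -m
docker image inspect --format '{{.Os}}/{{.Architecture}}' postgres:10.6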
I am trying to run RabbitMQ along with the InfluxDB TICK stack using docker-compose. When I run RabbitMQ with the command docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I am able to access them from a remote machine. However, when I run RabbitMQ as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (I confirmed this with the nmap command). Also, either way I am able to access the RabbitMQ management console at http://localhost:15672.
How can I set this up so I can access RabbitMQ from a remote machine using docker-compose?
Thank you.
Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
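(For reference, the corrected service definition, with only the port mapping changed:)
  rabbitmq:
    image: rabbitmq:3-management
    volumes:
      - ./rabbitmq/data:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"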
I am getting started with Docker and docker-compose. I have followed the tutorials, and I use a docker-compose.yml file to run one of my sites on my local machine.
I can see my site running by going to http://localhost.
My problem now is trying to run more than one site. If one of my sites is running and I try to run another site using docker-compose up -d, I get the following error.
$ docker-compose up -d
Creating network "exampleCOM_default" with driver "bridge"
Creating exampleCOMphp-fpm ...
Creating exampleCOMmariadb ... error
Creating exampleCOMphp-fpm ... done
ERROR: for exampleCOMmariadb Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
Creating exampleCOMnginx ... error
ERROR: for exampleCOMnginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
ERROR: for db Cannot start service db: driver failed programming external connectivity on endpoint exampleCOMmariadb (999572f33113c9fce034b4ed72aa072708f6f477eb2af8ad614c0126ca457b64): Bind for 0.0.0.0:3306 failed: port is already allocated
ERROR: for nginx Cannot start service nginx: driver failed programming external connectivity on endpoint exampleCOMnginx (9dc04f8b06825d7ff535afb1101933be7435c68f4350f845c756fc93e1a0322c): Bind for 0.0.0.0:443 failed: port is already allocated
Encountered errors while bringing up the project.
This is my docker-compose file. I am using the LEMP stack (PHP, NGINX, MariaDB).
version: '3'
services:
  db:
    container_name: ${SITE_NAME}_mariadb
    build:
      context: ./mariadb
    volumes:
      - ./mariadb/scripts:/docker-entrypoint-initdb.d
      - ./.data/db:/var/lib/mysql
      - ./logs/mariadb:/var/log/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    ports:
      - '${MYSQL_PORT:-3306}:3306'
    command:
      'mysqld --innodb-flush-method=fsync'
    networks:
      - default
    restart: always
  nginx:
    container_name: ${SITE_NAME}_nginx
    build:
      context: ./nginx
      args:
        - 'php-fpm'
        - '9000'
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./logs/nginx/:/var/log/nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - php-fpm
    networks:
      - default
    restart: always
  php-fpm:
    container_name: ${SITE_NAME}_php-fpm
    build:
      context: ./php7-fpm
      args:
        TIMEZONE: ${TIMEZONE}
    volumes:
      - ${APP_PATH}:/var/www/app
      - ./php7-fpm/config/php.ini:/usr/local/etc/php/php.ini
    environment:
      DB_HOST: db
      DB_PORT: 3306
      DB_DATABASE: ${MYSQL_DATABASE}
      DB_USERNAME: ${MYSQL_USER}
      DB_PASSWORD: ${MYSQL_PASSWORD}
    networks:
      - default
    restart: always
networks:
  default:
    driver: bridge
The host ports you have mapped are preventing you from starting another instance of the services, even though docker-compose creates a private network per project.
You can solve this problem by letting docker-compose assign random host ports.
A ports entry in docker-compose has the form:
ports:
  - "host_port:container_port"
If you specify only the container port, the host port is assigned randomly.
You can also provide the host_port value as a range.
In the example below, I started multiple nginx containers that are automatically bound to host ports taken from the range [30000-30005].
Command:
docker run -p 30000-30005:80 --name nginx1 -d nginx
Output:
9083d5fc97e0 nginx ... Up 2 seconds 0.0.0.0:30001->80/tcp nginx1
f2f9de1efd8c nginx ... Up 24 seconds 0.0.0.0:30000->80/tcp nginx
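(A docker-compose equivalent of the random-assignment option, as a sketch with an illustrative nginx service: list only the container port and Docker picks a free host port; docker-compose port nginx 80 then shows which one was chosen.)
services:
  nginx:
    image: nginx
    ports:
      # container port only: Docker assigns a free host port at start-up
      - "80"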
I'm trying to map a port from my container to a port on the host, following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors, but if I try to reach the frontend service using localhost:8081, the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses Docker Machine. On Windows with Docker Toolbox, you are running under a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the documentation on Docker Machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
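So, assuming the default machine name, the frontend from the compose file above is reachable via the machine IP rather than localhost, for example:
curl http://$(docker-machine ip default):8081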