Port exposed with docker run but not docker-compose up - docker

I am trying to run RabbitMQ alongside the InfluxDB TICK stack with docker-compose. When I run RabbitMQ with this command: docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I can access them from a remote machine. However, when I run RabbitMQ as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (I confirmed this using an nmap scan like the one shown below). Also, either way I am able to access the RabbitMQ management console at http://localhost:15672.
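A scan along these lines is what that check looks like (the exact invocation is only an example; any TCP port check would do):
$ nmap -p 5672,8086,8888,15672 <docker-host-ip>
The per-port open/closed results show which published ports actually reach the remote machine.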
How can I set this up so I can access rabbitmq from a remote machine using docker-compose?
Thank you.

Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
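With that one character fixed, the rabbitmq service would read (everything except the last port line is unchanged from the question):
  rabbitmq:
    image: rabbitmq:3-management
    volumes:
      - ./rabbitmq/data:/var/lib/rabbitmq
    ports:
      - "15672:15672"
      - "5672:5672"
After docker-compose up -d --force-recreate rabbitmq, port 5672 should show up in the same nmap check as 8086 and 8888 do.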

Related

Docker Compose port forwarding works fine on MacOS but not on Linux

I have the following docker-compose script:
version: '3.1'
services:
  flowable-ui:
    image: flowable/flowable-ui
    container_name: flowable-ui
    depends_on:
      - flowable-db
    environment:
      - SERVER_PORT=8888
      - SPRING_DATASOURCE_DRIVER-CLASS_NAME=org.postgresql.Driver
      - SPRING_DATASOURCE_URL=jdbc:postgresql://flowable-db:5432/flowable
      - SPRING_DATASOURCE_USERNAME=flowable
      - SPRING_DATASOURCE_PASSWORD=flowable
    ports:
      - 80:8888
  flowable-db:
    image: postgres
    container_name: flowable-db
    environment:
      - POSTGRES_PASSWORD=flowable
      - POSTGRES_USER=flowable
      - POSTGRES_DB=flowable
    ports:
      - 5432:5432
    command: postgres
I can start the flowable image with docker-compose up -d and it is accessible at http://localhost/flowable-ui in my browser.
Doing exactly the same on my Linux machine, http://localhost/flowable-ui does not load. I can see that something is there because the browser tries to access it, but the request eventually times out.
Do I have to set up something additional on the Linux machine?
You're trying to forward port 8888 from your container to port 80 on your host. On Linux, you need elevated permissions to open ports 1-1024.
Try a port above 1024, for example:
services:
  flowable-ui:
    ...
    ports:
      - 8888:8888
and then access your app on http://localhost:8888.
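Once the service is up, a quick sanity check from the Linux host could look like this (just one way to verify the mapping; the service name comes from the compose file above):
$ docker-compose ps flowable-ui      # should list 0.0.0.0:8888->8888/tcp under Ports
$ curl -I http://localhost:8888/flowable-ui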

Mapping ports in docker-compose file doesn't work. Network unreachable

I'm trying to map a port from my container to a port on the host, following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors. But if I try to reach the frontend service using localhost:8081, the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'
services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses Docker Machine. On Windows with Docker Toolbox you are running inside a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the documentation on Docker Machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
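So the check from the host goes against the machine IP rather than localhost, for example:
$ curl -I http://$(docker-machine ip default):8081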

Docker service restart without order sequence when using docker-compose depends_on

I'm trying to set up a SonarQube service in Docker for Windows, using MySQL as the database.
I am using the compose file below, and using depends_on to control the startup order:
db -> sonarqube
But when the Docker service / Windows restarts, the containers start in no particular sequence, which leads to an error when SonarQube tries to connect to MySQL before the MySQL service has started.
version: '3'
services:
  sonarqube:
    image: sonarqube:6.5
    container_name: sonarqube
    restart: always
    environment:
      - SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useSSL=true&useUnicode=true&characterEncoding=utf8
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
    ports:
      - 9000:9000
      - 9002:9002
      - 9092:9092
    volumes:
      - ../../volumes/data/sonarqube/conf:/opt/sonarqube/conf
      - ../../volumes/data/sonarqube/data:/opt/sonarqube/data
      - ../../volumes/data/sonarqube/extensions:/opt/sonarqube/extensions
      - ../../volumes/data/sonarqube/lib/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    depends_on:
      - db
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - ../../volumes/data/grafana-storage:/var/lib/grafana
    restart: always
    depends_on:
      - db
  # mysql service for sonarqube & grafana
  db:
    image: mysql:5.7
    container_name: sonar-mysql
    restart: always
    environment:
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
      - MYSQL_ROOT_PASSWORD=Password1
      - MAX_ALLOWED_PACKET=13421772800
    volumes:
      - ../../volumes/data/mysql:/var/lib/mysql
    ports:
      - 3306:3306
    command: mysqld --max_allowed_packet=80M --federated --event_scheduler=1
After reading some of the official docker-compose documentation, it says the best way is to handle this in the application code, for example by updating the entrypoint script to wait until the other service is online before starting the application.
That is probably the right way, but I still want to know: is there any way to control the container startup sequence when the Docker service restarts?
Thanks.
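For reference, the wait-in-the-entrypoint approach mentioned above can be sketched roughly like this (a hypothetical wrapper script; it assumes nc is available in the image and that the database is reachable as db on port 3306, as in the compose file above):
#!/bin/sh
# wait-for-db.sh - block until MySQL accepts TCP connections, then start the real command
until nc -z db 3306; do
  echo "waiting for db:3306 ..."
  sleep 2
done
exec "$@"
Newer Compose versions can express the same idea declaratively with a healthcheck on the db service plus depends_on with condition: service_healthy, depending on the Compose file version in use.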

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused

I am trying to build my Airflow setup using Docker and RabbitMQ. I am using the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker and airflow flower. The airflow.cfg file is used to configure Airflow.
There I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/.
My docker-compose file is as follows:
version: '3'
services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my airflow scheduler to RabbitMQ. I am getting the following error:
consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.
I have tried both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker Compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 and 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, it is generally not recommended to build images inside docker-compose.
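Concretely, the change is just the host part of the two URLs in airflow.cfg (user and password taken from the question's compose file):
broker_url = amqp://user:password@rabbit1:5672/
celery_result_backend = amqp://user:password@rabbit1:5672/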
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.

Debugging staging docker compose server with pycharm

I have the following docker-compose.yml file:
version: '2'
services:
  postgis:
    image: mdillon/postgis
    environment:
      POSTGRES_USER: ${POSTGIS_ENV_POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGIS_ENV_POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGIS_ENV_POSTGRES_DB}
    volumes:
      - /nexchange/database:/var/lib/postgresql/data
    restart: always
  app:
    image: onitsoft/nexchange:${DOCKER_IMAGE_TAG}
    volumes:
      - /nexchange/mediafiles:/usr/share/nginx/html/media
      - /nexchange/staticfiles:/usr/share/nginx/html/static
    links:
      - postgis
    restart: always
  web:
    image: onitsoft/nginx
    volumes:
      - /nexchange/etc/letsencrypt:/etc/letsencrypt
      - /nexchange/etc/nginx/ssl:/etc/nginx/ssl
      - /nexchange/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - /nexchange/mediafiles:/usr/share/nginx/html/media
      - /nexchange/staticfiles:/usr/share/nginx/html/static
    ports:
      - "80:80"
      - "443:443"
    links:
      - app
    restart: always
For some reason, some functionality that works in the local container does not work on staging.
I would like to configure a remote interpreter in PyCharm for staging, but it seems this setup is not currently supported.
I am using wercker + docker-compose and my IDE is PyCharm.
EDIT:
The question is: how to set up the PyCharm debugger to run on a remote host running docker-compose.
The solution, though not secure, is to open the Docker API on the remote target to public traffic via iptables (possibly restricted to traffic from a specific IP, if you have a static IP).
$ ssh $USER@staging.nexchnage.ru
oleg@nexchange-staging:~# sudo iptables -A INPUT -p tcp --dport 2376 -j ACCEPT
oleg@nexchange-staging:~# sudo /etc/init.d/iptables restart
Then simply use the Docker Compose support in JetBrains PyCharm / PhpStorm, or your IDE of choice.
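With the port open, one way to point Compose (or PyCharm's Docker integration) at the remote daemon is the DOCKER_HOST environment variable; note that 2376 is conventionally the TLS-secured port and 2375 the plain-text one, so use whichever your daemon actually listens on:
$ export DOCKER_HOST=tcp://staging.nexchnage.ru:2376
$ docker-compose ps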
Cheers
