Using RabbitMQ with Docker in production

I currently have a small server running in a Docker container; the server uses RabbitMQ, which is run by docker-compose using the Docker Hub image.
It is running nicely, but I'm worried that it may not be properly configured for production (production being a simple single server, without clustering or anything fancy). In particular, I'm worried about the disk space limit described in the RabbitMQ production checklist.
I'm not sure how to configure these things through docker-compose, as the environment variables defined by the image seem to be quite limited.
My docker-compose file:
version: '3.4'

services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - rabbitmq:/var/lib/rabbitmq
    restart: always
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=secretpassword
  my-server:
    # server config here

volumes:
  rabbitmq:

networks:
  server-network:
    driver: bridge

disk_free_limit is set in /etc/rabbitmq/rabbitmq.conf; there seems to be no environment variable for it.
So you just need to override rabbitmq.conf with your own file, using a Docker bind-mounted volume, to achieve your aim.
For your case, if you enter the rabbitmq container, you can see:
shubuntu1@shubuntu1:~$ docker exec some-rabbit cat /etc/rabbitmq/rabbitmq.conf
loopback_users.guest = false
listeners.tcp.default = 5672
So you just need to add disk_free_limit.absolute = 1GB to a local rabbitmq.conf and mount it into the container to override the default configuration. A full example follows:
rabbitmq.conf:
loopback_users.guest = false
listeners.tcp.default = 5672
disk_free_limit.absolute = 1GB
docker-compose.yaml:
version: '3.4'

services:
  rabbitmq:
    image: rabbitmq:3-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - rabbitmq:/var/lib/rabbitmq
      - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf

volumes:
  rabbitmq:

networks:
  server-network:
    driver: bridge
Check that it takes effect:
$ docker-compose up -d
$ docker-compose logs rabbitmq | grep "Disk free limit"
rabbitmq_1 | 2019-07-30 04:51:40.609 [info] <0.241.0> Disk free limit set to 1000MB
You can see the disk free limit is now set to 1GB (reported as 1000MB).
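Note that RabbitMQ also accepts a limit relative to total RAM instead of an absolute value, which can be more robust across machines; a sketch (the 1.5 ratio is an arbitrary example, not from the original setup):

# rabbitmq.conf - block publishers when free disk falls below 1.5 times total RAM
disk_free_limit.relative = 1.5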

Related

Persistent data using a mapped volume with a Kong container

I am trying to learn Kong using docker-compose. I am able to run kong+konga and create services, but whenever I do docker-compose down and then up again I lose all my data:
kong:
  container_name: kong
  image: kong:2.1.4-alpine
  restart: unless-stopped
  networks:
    kong-net:
      ipv4_address: 172.1.1.40
  volumes:
    - kong_data:/usr/local/kong/declarative
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-database
    KONG_PG_USER: kong
    KONG_PG_PASSWORD: password
    KONG_ADMIN_LISTEN: "0.0.0.0:8001, 0.0.0.0:8444 ssl"
    KONG_DB_UPDATE_FREQUENCY: 1m
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
  depends_on:
    - kong-migration
  ports:
    - "8001:8001"
    - "8444:8444"
    - "8000:8000"
    - "8443:8443"
It looks like the volume mapping is not working. Please help.
If you want to keep data when your Kong docker-compose stack is down, it is better to run Kong in database mode.
Then you can create a persistent volume for your database and it will keep your changes.
In the Kong manual you will find there are two supported databases: PostgreSQL and Cassandra.
PostgreSQL is my choice for small projects, as I'm not planning for huge horizontal scaling with a Cassandra database.
As you will find in the manual, starting your project with Docker and a database is very simple.
But remember to add a volume to your database service, as the sample in the manual has no volume.
For PostgreSQL you can add -v /custom/mount:/var/lib/postgresql/data to the docker run command.
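A minimal sketch of the full command (the container name kong-database is a placeholder; the credentials match the compose snippet below):

docker run -d --name kong-database \
  -e POSTGRES_USER=your_db_user \
  -e POSTGRES_DB=kong \
  -e POSTGRES_PASSWORD=your_db_password \
  -v /custom/mount:/var/lib/postgresql/data \
  postgres:latest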
or
volumes:
  postgres-data:
    driver: local

services:
  postgres:
    restart: unless-stopped
    image: postgres:latest
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=your_db_password
    volumes:
      - postgres-data:/var/lib/postgresql/data
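Before starting Kong against this database you would normally also run its migrations once. A hedged sketch using Kong's standard kong migrations bootstrap command, assuming the credentials above and that the database service is reachable as postgres on the compose network (substitute your actual compose network name):

docker run --rm \
  --network=your_compose_network \
  -e KONG_DATABASE=postgres \
  -e KONG_PG_HOST=postgres \
  -e KONG_PG_USER=your_db_user \
  -e KONG_PG_PASSWORD=your_db_password \
  kong:2.1.4-alpine kong migrations bootstrap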
Answer: you should use a Docker volume to have persistent data.
As the reference says:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers
The first step is to create a volume that your host and Docker container will share:
docker volume create new-volume
The second step is to use that volume in a docker-compose file (in your case).
A single docker-compose service with a volume looks like this:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
On the first invocation of docker-compose up the volume will be created; the same volume will be reused on subsequent invocations.
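You can confirm the volume was created with docker volume ls. Note that compose prefixes the volume name with the project (directory) name by default, so myapp may show up like this (the myproject_ prefix here is illustrative):

$ docker volume ls
DRIVER    VOLUME NAME
local     myproject_myapp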
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true

Docker communication inside docker-compose and with a database outside Docker

I'm a little bit confused about Docker and network communication. I tried many things but it didn't work. :-(
I have the following docker-compose file:
version: '3'

services:
  nginx:
    container_name: nginx
    image: nginx:stable-alpine
    restart: unless-stopped
    tty: true
    ports:
      - 80:80
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - app
    networks:
      - frontend
      - backend
  app:
    restart: unless-stopped
    tty: true
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    expose:
      - "9090"
    ports:
      - 9090:9090
    networks:
      - backend

networks:
  frontend:
  backend:
And I would like to communicate:
From nginx to app (this probably works)
From app to PostgreSQL, which is installed on the server (not in a Docker container)
I cannot do this; I tried many things but something is wrong. :-(
You can choose either of these two options:
Make your PostgreSQL listen on all your network interfaces (or on the Docker bridge, for a more secure but more complex setup). To achieve that, make sure your config looks like this:
# grep listen /var/lib/pgsql/data/postgresql.conf
listen_addresses = '*'
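Depending on your setup you may also need a matching pg_hba.conf entry so PostgreSQL accepts connections from the containers; a sketch assuming the default Docker bridge subnet 172.17.0.0/16 and password authentication (adjust the subnet to your bridge):

# pg_hba.conf
host    all    all    172.17.0.0/16    md5

Restart PostgreSQL after changing either file.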
Use host network mode in your docker-compose file, which runs the container in your host's network namespace instead of creating a new network:
network_mode: "host"

Access from Docker stack mode to a local network host

I need to make an FTP connection to a 192.168... host (local network), and a connection to a mongo container.
Docker in swarm mode blocks network_mode: host (and I can't see the remote FTP host inside the container).
docker stack has docs about --publish mode=host,target=80,published=8080, but I can't find out how to write that in a docker-compose file.
My docker-compose.yml file:
version: '3'

services:
  node:
    image: tgbot-test_node_1
    build:
      context: ..
    env_file: .env.test
    network_mode: host
    links:
      - mongo # works
    depends_on:
      - mongo
    deploy:
  mongo:
    image: mongo
    network_mode: "bridge"
    restart: on-failure
    ports:
      - 8080:80 # does not work, only 27017/tcp is exposed
      # does not work either:
      # - mode: host
      #   target: 27019
      #   published: 27017
    env_file:
      - .env.test
    volumes:
      - db:/data/db
    deploy:
      resources:
        limits:
          cpus: '0.75'

volumes:
  db:
I need swarm mode for limiting resources.
How can I access the FTP host?
Docker version 19.03.12, build 48a66213fe
docker-compose version 1.26.2, build eefe0d31
UPD
With Joel Magnuson's answer I got PORTS: 27017/tcp on the mongo container. Ports are not forwarded with stack deploy, no matter what I put - whether "80:80" or "27017".
I set
ports:
  - 27018:27017
and got
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab58c781fdb9 mongo:latest "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 27017/tcp tgbot-test_mongo.1.3i7yps3saqo3nk4xxyk0eka7h
43c0e3cfe960 tgbot-test_node_1:latest "docker-entrypoint.s…" 3 seconds ago Up 3 seconds tgbot-test_node.1.v23cufsrr683gdg2bicgf80q2
I think this is just a configuration issue. You mentioned an "FTP host" but you didn't mention running an FTP server, so hopefully the below helps with your mongo database.
mongodb always runs on port 27017 inside the container by default unless configured otherwise, so you must map the container's port 27017 to the host, not port 80.
version: '3'

services:
  node:
    image: tgbot-test_node_1
    env_file: .env.test # configure with mongodb://mongo:27017/<db name>
    networks:
      - tgbot-test
  mongo:
    image: mongo
    ports:
      - 27017:27017 # only needed if you want to access it outside of the stack;
                    # otherwise it's always visible within the stack network as 'mongo'
    volumes:
      - /home/$USER/db:/data/db # can mount to host instead
    networks:
      - tgbot-test

networks:
  tgbot-test:
    driver: overlay # suggest overlay network

# volumes:
#   db: # this is not persistent by itself - can mount to host
You could also create an external volume.
docker volume create --name tgbot-db
...
volumes:
  tgbot-db:
    external: true
You should be able to connect to the mongodb instance from the host or a remote machine with mongodb://192.168.X.X:27017/<db name>, or from inside a container in the same stack using docker swarm's DNS name of mongo (the service name): mongodb://mongo:27017/<db name>.
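For completeness, the --publish mode=host,target=80,published=8080 syntax from the question does have a compose-file equivalent: the long port syntax, available in compose file format 3.2 and later. A sketch for the mongo service:

ports:
  - target: 27017
    published: 27017
    protocol: tcp
    mode: host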

Port exposed with docker run but not docker-compose up

I am trying to run rabbitmq along with the influxdb TICK stack with docker-compose. When I run rabbitmq with this command: docker run -d --rm -p 5672:5672 -p 15672:15672 rabbitmq:3-management, both ports are open and I am able to access them from a remote machine. However, when I run rabbitmq as part of a docker-compose file, it is not accessible from a remote machine. Here is my docker-compose.yml file:
version: "3.7"
services:
influxdb:
image: influxdb
volumes:
- ./influxdb/influxdb/data/:/var/lib/influxdb/
- ./influxdb/influxdb/config/:/etc/influxdb/
ports:
- "8086:8086"
rabbitmq:
image: rabbitmq:3-management
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
ports:
- "15672:15672"
- "5672:5627"
telegraf:
image: telegraf
volumes:
- ./influxdb/telegraf/config/:/etc/telegraf/
- /proc:/host/proc:ro
depends_on:
- "influxdb"
- "rabbitmq"
chronograf:
image: chronograf
volumes:
- ./influxdb/chronograf/data/:/var/lib/chronograf/
ports:
- "8888:8888"
depends_on:
- "telegraf"
More information: when I run this with docker-compose up -d, ports 8086 and 8888 are accessible from a remote machine (I confirmed this using the nmap command). Also, either way I am able to access the rabbitmq management console at http://localhost:15672.
How can I set this up so I can access rabbitmq from a remote machine using docker-compose?
Thank you.
Looks like just a typo in the port mapping in docker-compose.yml: 5672:5627 should actually be 5672:5672.
Otherwise the docker-compose configuration looks just fine.
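That is, for the rabbitmq service:

ports:
  - "15672:15672"
  - "5672:5672"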

Mapping ports in docker-compose file doesn't work. Network unreachable

I'm trying to map a port from my container to a port on the host following the docs, but it doesn't appear to be working.
After I run docker-compose -f development.yml up --force-recreate I get no errors, but if I try to reach the frontend service using localhost:8081 the network is unreachable.
I used docker inspect to view the IP and tried to ping that, and still nothing.
Here is the docker-compose file I am using. Am I doing anything wrong?
development.yml
version: '3'

services:
  frontend:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      - ./frontend/public:/var/www/html
  api:
    image: richarvey/nginx-php-fpm:latest
    ports:
      - "8080:80"
    restart: always
    volumes:
      - ./api:/var/www/html
    environment:
      APPLICATION_ENV: development
      ERRORS: 1
      REMOVE_FILES: 0
    links:
      - db
      - mq
  db:
    image: mariadb
    restart: always
    volumes:
      - ./data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: dEvE10pMeNtMoDeBr0
  mq:
    image: rabbitmq:latest
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: developer
      RABBITMQ_DEFAULT_PASS: dEvE10pMeNtMoDeBr0
You are using Docker Toolbox. Docker Toolbox uses docker-machine. With Docker Toolbox on Windows, you are running under a VirtualBox VM with its own IP, so localhost is not where your containers live. You will need to go to 192.168.99.100:8081 to find your frontend.
As per the documentation on docker-machine (https://docs.docker.com/machine/get-started/#run-containers-and-experiment-with-machine-commands):
$ docker-machine ip default
192.168.99.100
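A quick way to confirm from the host, assuming the default machine IP shown above:

$ curl http://192.168.99.100:8081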
