RabbitMQ docker container persistence on Windows - docker

I've created a docker-compose.yml file, and when I try to "up" it, my RabbitMQ container fails to persist its data to my host volume. It complains that the Erlang cookie file is not accessible by its owner only.
Any help with this would be greatly appreciated.
EDIT
So I added the above volume binding, and RabbitMQ does seem to place files into that directory when I do a docker-compose up. I then add 2 messages and can see via the RabbitMQ console that the 2 messages are sitting in the queue. But when I perform a docker-compose down followed by a docker-compose up, expecting the 2 messages to still be there since the directory and files were created, they aren't and the message count is 0 :(.

Maybe it's trying to access some privileged user functions.
Try adding privileged: true to your service in the docker-compose.yml and run docker-compose up again.
If that works and you would rather grant only the privileges RabbitMQ actually needs, replace privileged: true with a capabilities section for adding or dropping privileges:
cap_add:
  - ALL
  - <WHAT_YOU_PREFER>
cap_drop:
  - NET_ADMIN
  - SYS_ADMIN
  - <WHAT_YOU_PREFER>
For further information, please check Compose file documentation
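For context, a minimal sketch of where privileged: true (or the capability lists above) would sit in the service definition; the service name and image tag here are placeholders, not taken from the question:

services:
  rabbitmq:
    image: rabbitmq:3-management   # image tag is an example
    privileged: true               # or swap this for the narrower cap_add/cap_drop lists shown above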
EDIT:
In order to provide data persistence when a container fails, add a volumes section to the docker-compose.yml file:
volumes:
  - /your_host_dir_with_data:/destination_in_docker
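For the RabbitMQ case specifically, a sketch might look like the following (host path and image tag are placeholders; RabbitMQ stores its data under /var/lib/rabbitmq inside the container, in a directory named after the node, rabbit@<hostname>, so pinning the hostname keeps that directory stable when the container is re-created):

services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: my-rabbit                     # stable hostname => stable data directory name
    volumes:
      - ./rabbitmq-data:/var/lib/rabbitmq   # host directory : RabbitMQ data directory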

Related

Docker container R/W permissions to access remote TrueNAS SMB share

I've been banging my head against the wall trying to sort out permissions issues when running a container that uses a remote SMB share for storing configuration files.
I found this post and answer but still can't seem to get things to work:
docker-add-network-drive-as-volume-on-windows
For the YAML below, yes, everything is formatted correctly; I originally copied it from my reddit post, which mangled the indentation.
My set-up is as follows:
Running Proxmox as my hypervisor with:
TrueNAS Scale as the NAS
Debian VM for hosting Docker
The TrueNAS VM has a single pool, with 1 dataset for SMB shares and 1 dataset for NFS shares (implemented for troubleshooting purposes)
I have credentials steve:steve (1000:1000) with supersecurepassword and Full Control ACL permissions on the SMB share. I can access this share via Windows and the CLI, and all expected operations behave as expected.
On the Debian host, I have created user steve:steve (1000:1000) with supersecurepassword.
I have been able to successfully mount and map the share on the Debian host using CIFS with the following entry:
//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,credentials=/root/.truenascreds 0 0
The credentials are:
username=steve
password=supersecurepassword
I can read/write from CLI through the mount point, view files, modify files, etc.
I have also successfully mounted and read/written the share with these additional options:
file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev
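Presumably, with those options folded in, the full mount entry would look something like this (same share and credentials file as above):

//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,credentials=/root/.truenascreds,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev 0 0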
Now here's where I start having problems. I can create a container using docker compose or Portainer (manual creation, plus a stack for compose), but I run into database errors as the container attempts to start.
version: "2.1"
services:
babybuddytestsmbmount:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddytestsmbmount
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1801:8000
restart: unless-stopped
volumes:
- /mnt/dockershare/babybuddy2:/config
Docker will create all folders and files and start the container, but the web UI returns a server 500 error. The logs indicate these database errors, which result in a large number of exceptions:
sqlite3.OperationalError: database is locked
django.db.utils.OperationalError: database is locked
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (database is locked)
I also tried mounting the SMB share in a docker volume using the following:
version: "2.1"
services:
babybuddy:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddy
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1800:8000
restart: unless-stopped
volumes:
- dockerdata:/config
volumes:
dockerdata:
driver_opts:
type: "cifs"
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=3.0"
device: "//192.168.10.206/dockerdata"
I have also tried this under options:
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,rw,vers=3.0"
Docker again is able to create the container, create & mount the volume, and create all folders and files, but it encounters the same DB errors indicated above.
I believe this is because the container is trying to access the SMB share as root, which TrueNAS does not permit. I have verified that all files and folders are under the correct ownership, and during troubleshooting I have also stopped the container, recursively chowned and chgrped the dataset to root:root, and restarted the container; no dice. Changing the SMB credentials on the Debian host to root results in a failure to connect.
Testing to ensure I didn't have a different issue causing problems, I was able to successfully start the container locally on the host as well as using a remote NFS share from the same TrueNAS VM.
I have also played with the dataset permissions, changing owners within TrueNAS, attempting permissions without ACL, etc.
Each of these variations was done with a fresh dataset for SMB, a wipe and recreation of Docker, and a reinstall of Debian.
Any help or suggestions would be greatly appreciated.
Edit: I also tried this with Ubuntu as the docker host and attempted to have docker run under the steve user, with disastrous results.
I expected to be able to mount the SMB share from my TrueNAS system on my Debian docker host and use it for the container's config, but instead I encounter write errors in the database files that are part of the container. Local docker instances and NFS mounts work fine.

docker-compose: volume problem: path on host created but not populated by container

I have the following docker-compose:
version: '3.7'
services:
  db:
    image: bitnami/mongodb:5.0.6
    volumes:
      - "/app/local-data:/data/db"
    env_file: ./db/.env
The problem is that data does not persist between docker-compose up/down, and docker does not seem to use /app/local-data even though it creates it.
When I run docker-compose, the container starts and works normally. The directory /app/local-data is created by docker, however MongoDB does not populate it, and no r/w error is shown on the console. This makes me think a temporary volume is assigned to the container instead. But if that is true, why does docker still create /app/local-data and not use it?
Any ideas how I can debug this?
Docker directives like volumes: don't know anything about what's actually running in the image. That directive creates the specified host and container paths if required, and bind-mounts the host path into the container path. It's up to the application code to use that directory (or not).
If you look at the bitnami/mongodb Docker Hub page under "Persisting your database", the database is configured to store data in the /bitnami/mongodb directory inside the container, and that directory needs to be the second volumes: path. Also note the requirement that the data directory needs to be writable by user ID 1001, which may or may not exist on your host (there's no specific requirement to create it).
volumes:
  - "/app/local-data:/bitnami/mongodb"
    #                ^^^^^^^^^^^^^^^^

sudo chown -R 1001 /app/local-data
sudo docker-compose up -d
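To confirm the bind mount is actually being used, something along these lines should show MongoDB's files appearing under the host path and surviving a restart (paths as in the compose file above):

ls /app/local-data               # should now contain MongoDB's data files
sudo docker-compose down
sudo docker-compose up -d        # data written before the restart should still be present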

Problems writing log to a shared docker volume

I have not been able to connect my app's container and Promtail via volumes so that Promtail can read the logs.
I have an app that writes log files (via log4j2 in Java) named appXX.log into a folder; when I share the volume, my app is no longer able to write these files.
Here is my docker-compose (I have skipped the loki/grafana containers).
My app writes fine to that path without the shared volume, so it must be something about how docker manages the volumes. Any ideas what could be going on?
promtail:
  image: grafana/promtail:2.4.1
  volumes:
    - "app-volume:/var/log/"
    - "path/config.yml:/etc/promtail/config.yml"
  command: -config.file=/etc/promtail/config.yml
app:
  image: app/:latest
  volumes:
    - "app-volume:/opt/app/logs/"
  command: ["/bin/sh","-c","java $${JAVA_ARGS} -jar app-*.jar"]
volumes:
  app-volume:
On the other hand, I don't know if this is even the right way to get an application's logs into Promtail. I've seen that Promtail usually reads the container's log directly (which does not work for me, because that only works with Docker on Linux), and I can think of these other possibilities. Which would be the right one in case it's impossible via volumes?
other alternatives
Any idea is welcome, thanks !

How to configure RabbitMQ for message persistence in Docker swarm?

How can I configure RabbitMQ to retain messages on node restart in docker swarm?
I've marked the queues as durable and I'm setting the message's delivery mode to 2. I'm mounting /var/lib/rabbitmq/mnesia to a persistent volume. I've docker exec'd to verify that rabbitmq is indeed creating files in said folder, and all seems well. Everything works in my local machine using docker-compose.
However, when the container crashes, docker swarm creates a new one, and this one seems to initialize a new Mnesia database instead of using the old one. The database's name seems to be related to the container's id. It's just a single node, I'm not configuring any clustering.
I haven't changed anything in rabbitmq.conf, except for the cluster_name, since it seemed to be related to the folder created, but that didn't solve it.
Relevant section of the docker swarm configuration:
rabbitmq:
  image: rabbitmq:3.9.11-management-alpine
  networks:
    - default
  environment:
    - RABBITMQ_DEFAULT_PASS=password
    - RABBITMQ_ERLANG_COOKIE=cookie
    - RABBITMQ_NODENAME=rabbit
  volumes:
    - rabbitmq:/var/lib/rabbitmq/mnesia
    - rabbitmq-conf:/etc/rabbitmq
  deploy:
    placement:
      constraints:
        - node.hostname==foomachine
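Since the Mnesia directory is named after the node (rabbit@<hostname>), and swarm gives each replacement container a new generated hostname, one thing worth trying, sketched below, is pinning the service's hostname so the directory name stays the same across container replacements (the hostname value is just an example):

rabbitmq:
  image: rabbitmq:3.9.11-management-alpine
  hostname: rabbit-node          # stable hostname => node rabbit@rabbit-node, so the same Mnesia directory is reused
  volumes:
    - rabbitmq:/var/lib/rabbitmq/mnesia
    - rabbitmq-conf:/etc/rabbitmq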

Is there a better way to avoid folder permission issues for docker containers launched from docker compose in manjaro?

Is there a better way to avoid folder permission issues when a relative folder is set in a docker compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image: it will always throw the ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes]; error.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for an easier-to-manage solution, similar to the Docker experience on Ubuntu and macOS, where I do not have to first create a directory as root in order for folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues when accessing docker info or the socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml

networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I keep running into this permission issue with the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root) and the UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
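A quick sketch of the second option, assuming (as the answer above does) that the Elasticsearch process in the bitnami image runs with group root:

mkdir -p ./data
sudo chgrp root ./data     # owned by group root, per the answer above
sudo chmod 775 ./data      # group-writable so the container process can create its files
docker-compose up -d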
