Is there a better way to avoid folder permission issues for Docker containers launched from docker-compose on Manjaro?

Is there a better way to avoid folder permission issues when a relative folder is mounted in a docker-compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes]; error on startup.
I can get around it by:
creating the data directory with sudo, followed by chmod 777 (shown below)
attaching a Docker volume
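Concretely, that first workaround amounts to:
sudo mkdir -p ./data
sudo chmod 777 ./data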
But I am looking for an easier-to-manage solution, similar to the Docker experience on Ubuntu and macOS, where I do not have to first create a directory as root for folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues accessing docker info or the Docker socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from Git; they work fine on other systems, but I run into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some solutions are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.

This has nothing to do with the Linux distribution; it is a general problem with Docker and bind mounts. A bind mount is when you mount a host directory into a container. The problem is that the Docker daemon creates the directory as the user it runs as (root), and the UIDs/GIDs are mapped literally into the container.
Not that running as root is advisable, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that works for the bitnami image is to make the ./data directory owned by group root and group-writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID the bitnami image runs with to whatever group owns your data directory, and make the directory group-writable.
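A minimal sketch of the last two options (assuming, per the bitnami docs, that the container process runs as the non-root UID 1001 with group root):
# Option 2: give ./data to group root (GID 0) and make it group-writable
sudo chgrp 0 ./data
sudo chmod 775 ./data
For option 3, in the compose service definition (1000 here is a hypothetical host GID that owns ./data):
user: "1001:1000"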

Related

Docker container R/W permissions to access remote TrueNAS SMB share

I've been banging my head against the wall trying to sort out permissions issues when running a container that uses a remote SMB share for storing configuration files.
I found this post and answer but still can't seem to get things to work:
docker-add-network-drive-as-volume-on-windows
My set-up is as follows:
Running Proxmox as my hypervisor with:
TrueNAS Scale as the NAS
Debian VM for hosting Docker
The TrueNAS VM has a single pool, with 1 dataset for SMB shares and 1 dataset for NFS shares (implemented for troubleshooting purposes)
I have credentials steve:steve (1000:1000), password supersecurepassword, with Full Control ACL permissions on the SMB share. I can access this share via Windows and the CLI, and all operations behave as expected.
On the Debian host, I have created user steve:steve (1000:1000) with supersecurepassword.
I have been able to successfully mount and map the share within the Debian host over CIFS using this /etc/fstab entry:
//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,credentials=/root/.truenascreds 0 0
The credentials are:
username=steve
password=supersecurepassword
I can read/write from CLI through the mount point, view files, modify files, etc.
I have also successfully mounted and read/written the share with these additional options:
file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev
Now here's where I start having problems. I can create a container using docker-compose or Portainer (manual creation, and a stack for compose), but I run into database errors as the container attempts to start.
version: "2.1"
services:
babybuddytestsmbmount:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddytestsmbmount
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1801:8000
restart: unless-stopped
volumes:
- /mnt/dockershare/babybuddy2:/config
Docker creates all folders and files and starts the container, but the web UI returns a server 500 error. The logs show these database errors, which result in a large number of exceptions:
sqlite3.OperationalError: database is locked
django.db.utils.OperationalError: database is locked
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (database is locked)
I also tried mounting the SMB share in a docker volume using the following:
version: "2.1"
services:
babybuddy:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddy
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1800:8000
restart: unless-stopped
volumes:
- dockerdata:/config
volumes:
dockerdata:
driver_opts:
type: "cifs"
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=3.0"
device: "//192.168.10.206/dockerdata"
I have also tried this under options:
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,rw,vers=3.0"
Docker again is able to create the container, create and mount the volume, and create all folders and files, but it encounters the same DB errors indicated above.
I believe this is because the container is trying to access the SMB share as root, which TrueNAS does not permit. I have verified that all files and folders are under the correct ownership; during troubleshooting I have also stopped the container, recursively chowned and chgrped the dataset to root:root, and restarted the container; no dice. Changing the SMB credentials on the Debian host to root results in a failure to connect.
To ensure I didn't have a different issue causing problems, I was able to successfully start the container locally on the host, as well as using a remote NFS share from the same TrueNAS VM.
I have also played with the dataset permissions, changing owners within TrueNAS, attempting permissions without ACL, etc.
Each of these variations was done with a fresh dataset for SMB and a wipe and recreation of Docker, as well as a reinstall of Debian.
Any help or suggestions would be greatly appreciated.
Edit: I also tried this with Ubuntu as the Docker host, and attempted to have Docker run under the steve user, with disastrous results.
I expected to be able to mount the SMB share from my TrueNAS system on my Debian Docker host without encountering write errors in the database files that are part of the container. Local Docker instances and NFS mounts work fine.
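One suggestion worth testing here, offered as an assumption rather than a verified fix: SQLite "database is locked" errors on CIFS shares are commonly caused by byte-range locking, and mount.cifs has a nobrl option that stops the client from sending byte-range lock requests. In the compose volume above, the options line would become:
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,nobrl,vers=3.0"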

docker-compose: volume problem: path on host created but not populated by container

I have the following docker-compose:
version: '3.7'
services:
  db:
    image: bitnami/mongodb:5.0.6
    volumes:
      - "/app/local-data:/data/db"
    env_file: ./db/.env
The problem is that data does not persist between docker-compose up/down, and Docker does not seem to use /app/local-data even though it creates it.
When I run docker-compose, the container starts and works naturally. The directory /app/local-data is created by Docker, but MongoDB does not populate it, and no r/w error is shown on the console. This makes me think a temporary volume is assigned to the container instead. But if that is true, then why does Docker still create /app/local-data and not use it?
Any ideas how I can debug this?
Docker directives like volumes: don't know anything about what's actually running in the image. That directive creates the specified host and container paths if required, and bind-mounts the host path into the container path. It's up to the application code to use that directory (or not).
If you look at the bitnami/mongodb Docker Hub page under "Persisting your database", the database is configured to store data in the /bitnami/mongodb directory inside the container, and that directory needs to be the container side of the volumes: mapping. Also note the requirement that the data directory needs to be writable by user ID 1001, which may or may not exist on your host (there's no specific requirement to create that user).
volumes:
  - "/app/local-data:/bitnami/mongodb"
  #                   ^^^^^^^^^^^^^^^^
sudo chown -R 1001 /app/local-data
sudo docker-compose up -d
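To verify the fix (a quick check, assuming the compose file above):
sudo docker-compose down
sudo docker-compose up -d
ls /app/local-data   # should now be populated with MongoDB's data files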

Docker bind propagation mount error "is not a shared mount"

I am trying to mount a FUSE virtual filesystem from inside a Docker container and expose the mount point to the host.
Docker is installed via snap on Ubuntu 20.04
The software is a fresh install of Seafile (a Dropbox alternative), but this problem I believe is more related to Docker, snap, and mounting file systems on Ubuntu. For what it's worth, I was following the official instructions here.
Inside the container (when it runs successfully), a script mounts a FUSE virtual filesystem to /seafile-fuse that makes all files stored within Seafile visible.
docker-compose.yml excerpt:
version: '3.3'
services:
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    volumes:
      - /home/jonathan/seafile/seafile-data:/shared
      - type: bind
        source: /home/jonathan/seafile/seafile-fuse
        target: /seafile-fuse
        bind:
          propagation: rshared
    privileged: true
    cap_add:
      - SYS_ADMIN
This leads to:
ERROR: for seafile Cannot start service seafile: path /home/jonathan/seafile/seafile-fuse is mounted on /home but it is not a shared mount
I found this somewhat related answer, which hints that the issue may have to do with the Docker daemon running in a different mount namespace. But I am unable to get that solution to work.
What do I need to do to connect the host directory /home/jonathan/seafile/seafile-fuse so that it sees the container directory /seafile-fuse?
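For reference, the usual way to satisfy the "shared mount" requirement is to bind-mount the host directory onto itself and mark it shared before starting compose; a sketch (assuming this also applies to a snap-installed daemon):
sudo mount --bind /home/jonathan/seafile/seafile-fuse /home/jonathan/seafile/seafile-fuse
sudo mount --make-shared /home/jonathan/seafile/seafile-fuse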
Bonus question...
Given that this is to be an internet-facing home server, is it necessary for this to be a privileged container? Are there better options?
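For the bonus question: a commonly used alternative to a fully privileged container for FUSE workloads (a sketch, not verified against this image) is to grant only the FUSE device and the SYS_ADMIN capability:
devices:
  - /dev/fuse
cap_add:
  - SYS_ADMIN
security_opt:
  - apparmor:unconfined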
Thanks!!

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read: very) new to Docker, so I am experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, MySQL, PHP, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful, as I modified my Dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists in my DockerHub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on StackOverflow.
Screenshot: files from the container, visible on my machine (the host).
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site to have a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see if the staged test.txt file is now inside the container. This gives you definitive evidence that when you use volumes, the data on your host overwrites the data in the container.
With that being said, doing something like this to share a log directory will work: the volume path specified in the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config/app files.
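A minimal sketch of that log-directory pattern (the storage/logs path is an assumption based on a default Laravel layout):
volumes:
  - ./logs:/var/www/site/storage/logs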
Hope this helps.

Empty directory when mounting volume using windows for docker

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same version on the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host; however, I can't find anywhere that explicitly says volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring up the services with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount, I can see the files that I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see the volume path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in "Docker for Windows" -> Settings -> Shared Drives, apply, then check it again and apply.
You should use /c/Users, with a capital "C".
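Applied to the compose file above, the bind line would become:
volumes:
  - /c/Users/deep/projects/chat/app:/usr/src/app:rw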
