Industrial Raspberry Pi with portainer.io: file permission issues, app dumped - docker

I've built an image (with docker buildx) from an Ubuntu 18 base, saved it to a tar archive, and uploaded it to Portainer (running on a Raspberry Pi).
Using Portainer, I create a stack from a YAML file; I get a "deployment error", but the container is created.
Below is the compose file content:
version: '2'
services:
  sda:
    image: <our image>
    network_mode: "host"
    container_name: "sda4"
    volumes:
      - virtual_sda4:/opt/<company>/<application>
    stdin_open: true
Running the container, our app is dumped (it crashes).
Connecting to the console as the root user, listing files gives a strange result:
all files have "undefined" permissions (question marks instead of rwx).
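In this broken state a listing typically looks something like the following (illustrative output, not captured from the actual board):

# ls -l /opt/<company>
ls: cannot access '/opt/<company>/<application>': Operation not permitted
total 0
d????????? ? ? ? ?            ? <application>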
It seems to be a permission issue.
(*) stdin_open: true is used to keep the container alive, because the app is dumped.
Any idea is appreciated
Thanks
Lorenzo

Just an update on this topic. The problem was caused by different Docker versions being used for building and deploying the image: the builder was on Docker 20.x, while the version on the RPI was 18.x. Updating the Docker version on the RPI, as provided by the board integrator, solved the issue.
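If you run into something similar, a quick first check is to compare the engine versions on the two machines (a sketch; the --format string is a standard docker version template):

docker version --format '{{.Server.Version}}'   # run on both the build host and the RPI
docker buildx version                           # build host only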
Hoping this is useful to others.
Lorenzo

Related

Docker container R/W permissions to access remote TrueNAS SMB share

I've been banging my head against the wall trying to sort out permission issues when running a container that uses a remote SMB share for storing configuration files.
I found this post and answer but still can't seem to get things to work:
docker-add-network-drive-as-volume-on-windows
For the YAML below, yes, everything is formatted correctly; I originally copied it over from my Reddit post.
My set-up is as follows:
Running Proxmox as my hypervisor, with:
- TrueNAS Scale as the NAS
- a Debian VM for hosting Docker
The TrueNAS VM has a single pool, with one dataset for SMB shares and one dataset for NFS shares (the latter implemented for troubleshooting purposes).
I have credentials steve:steve (1000:1000) with supersecurepassword and Full Control ACL permissions on the SMB share. I can access this share from Windows and from the CLI, and all expected operations behave as expected.
On the Debian host, I have created the user steve:steve (1000:1000) with supersecurepassword.
I have been able to successfully mount and map the share within the Debian host using CIFS, via this fstab entry:
//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,credentials=/root/.truenascreds 0 0
The credentials are:
username=steve
password=supersecurepassword
I can read/write from CLI through the mount point, view files, modify files, etc.
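A quick sanity check along those lines (a sketch using the paths from above):

sudo mount -a                      # pick up the fstab entry
touch /mnt/dockershare/writetest   # write as the mounting user
ls -l /mnt/dockershare/writetest   # should show steve:steve ownership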
I have also successfully mounted the share, with read/write working, using these additional options:
file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev
Now here's where I start having problems. I can create a container using docker compose or Portainer (both manual creation and a stack for compose), but I run into database errors as the container attempts to start.
version: "2.1"
services:
babybuddytestsmbmount:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddytestsmbmount
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1801:8000
restart: unless-stopped
volumes:
- /mnt/dockershare/babybuddy2:/config
Docker will create all folders and files and start the container, but the web UI returns a server 500 error. The logs indicate these database errors, which result in a large number of exceptions:
sqlite3.OperationalError: database is locked
django.db.utils.OperationalError: database is locked
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (database is locked)
I also tried mounting the SMB share in a docker volume using the following:
version: "2.1"
services:
babybuddy:
image: lscr.io/linuxserver/babybuddy:latest
container_name: babybuddy
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- CSRF_TRUSTED_ORIGINS=http://127.0.0.1:8000,https://babybuddy.domain.com
ports:
- 1800:8000
restart: unless-stopped
volumes:
- dockerdata:/config
volumes:
dockerdata:
driver_opts:
type: "cifs"
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=3.0"
device: "//192.168.10.206/dockerdata"
I have also tried this under options:
o: "username=steve,password=supersecurepassword,uid=1000,gid=1000,rw,vers=3.0"
Docker again is able to create the container, create & mount the volume, and create all folders and files, but it encounters the same DB errors indicated above.
I believe this is because the container is trying to access the SMB share as root, which TrueNAS does not permit. I have verified that all files and folders are under the correct ownership. During troubleshooting I have also stopped the container, recursively chowned and chgrped the dataset to root:root, and restarted the container: no dice. Changing the SMB credentials on the Debian host to root results in a failure to connect.
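One way to see the numeric ownership the container actually observes (a sketch; the container name is taken from the first compose file above):

docker exec babybuddytestsmbmount ls -ln /config   # numeric uid:gid as seen inside the container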
To make sure I didn't have a different issue causing problems, I verified that I was able to successfully start the container locally on the host, as well as using a remote NFS share from the same TrueNAS VM.
I have also played with the dataset permissions, changing owners within TrueNAS, attempting permissions without ACL, etc.
Each of these variations was done with a fresh dataset for SMB, a wipe-out and recreation of Docker, and a reinstall of Debian.
Any help or suggestions would be greatly appreciated.
Edit: I also tried this with Ubuntu as the Docker host, and attempted to have Docker run under the steve user, with disastrous results.
I expected to be able to mount the SMB share from my TrueNAS system on my Debian Docker host; instead I encounter write errors on the database files that are part of the container. Local Docker instances and NFS mounts work fine.
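Worth noting for anyone hitting the same wall: sqlite depends on byte-range locking, which CIFS mounts frequently do not honor, so "database is locked" can appear even when ownership is correct. A commonly suggested workaround (an untested sketch here, not a confirmed fix for this setup) is the nobrl mount option:

//192.168.10.206/dockerdata /mnt/dockershare cifs uid=1000,gid=1000,vers=3.0,nobrl,credentials=/root/.truenascreds 0 0

The same option can be appended to the o: string in the compose volume definition.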

Is there a better way to avoid folder permission issues for docker containers launched from docker compose on Manjaro?

Is there a better way to avoid folder permission issues when a relative folder is set in a docker compose file on Manjaro?
For instance, take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes]; error.
I can get around it by:
- creating the data directory with sudo, followed by chmod 777
- attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OS X, where I do not have to first create a directory as root in order for folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues when accessing docker info or the socket.
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some fixes are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution; it is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the missing host directory as the user it runs as (root), and UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID of the bitnami image to whatever group you had the data created with and make it group writable.
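A minimal sketch of the second option (and an inverted variant of the third): pre-create the bind-mount target so the daemon does not create it as root-owned and unwritable. The uid/gid 1001 in the comment is what bitnami images typically use and is an assumption here:

mkdir -p ./data                 # create the bind-mount target before docker-compose up
sudo chgrp 0 ./data             # group root (gid 0), as the answer suggests
sudo chmod 0775 ./data          # group-writable so the Elasticsearch process can write
# inverted variant of the third option: give the directory the group the container
# already runs with (bitnami images typically use uid/gid 1001 - an assumption):
# sudo chgrp 1001 ./data && sudo chmod 0775 ./data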

Files in Docker volumes not refreshing on Windows on file change

I had Docker for Windows, switched to Docker Toolbox, and am now back on Docker for Windows, and I ran into issues with volumes.
Before, volumes were working perfectly fine and my containers, which run with nodemon/ts-node/CLI watching files, were restarting properly on source code changes, but now they don't at all, so it looks like file changes from the host are not propagated into the container.
This is the docker-compose for one service:
api:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  volumes:
    - ./api:/srv
  working_dir: /srv
  links:
    - mongo
  depends_on:
    - mongo
  ports:
    - 3030:3030
  environment:
    MONGODB: mongodb://mongo:27017/api_test
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:api.mydomain.localhost
This is Dockerfile-dev:
FROM node:10-alpine
ENV NODE_ENV development
WORKDIR /srv
EXPOSE 3030
CMD yarn dev  # simply nodemon; works when run from the host
Can anyone help with that?
C drive is shared and verified with docker run --rm -v c:/Users:/data alpine ls /data showing list of files properly.
I will really appreciate any help.
We experienced the exact same problems in our team while developing nodejs/typescript applications with Docker on top of Windows and it has always been a big pain. To be honest, though, Windows does the right thing by not propagating the change event to the containers (Linux hosts also do not propagate the fsnotify events to containers unless the change is made from within the container). So bottom line: I do not think this issue will be avoidable unless you actually change the files within the container instead of changing them on the docker host. You can achieve this with a code sync tool like docker-sync, see this page for a list of available options: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync
Because we struggled with such issues for a long time, a colleague and I started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast two-way code sync between your local folders and folders within your dev containers (it works with any Kubernetes cluster, any volume, and even with ephemeral / non-persistent folders), and it is designed to work perfectly with hot-reloading tools such as nodemon. Set up minikube or a cluster with a one-click installer on some public cloud, run devspace up inside your project, and you will be ready to program within your DevSpace without ever having to worry about local Docker issues and hot-reloading problems. Let me know if it works for you or if there is anything you are missing.
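Aside from sync tools, watchers can also fall back to polling, which works even when no fsnotify events reach the container. With nodemon that is the --legacy-watch flag; a minimal sketch against the Dockerfile-dev above (src/index.ts and the ts-node dependency are assumptions):

# Dockerfile-dev variant: poll for changes instead of relying on inotify events
CMD npx nodemon --legacy-watch --ext ts --exec ts-node src/index.ts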
I've been stuck on this recently (Feb 2020, Docker Desktop 2.2) and none of the usual solutions really helped.
However, when I tried WSL 2 and ran my docker-compose from inside the Ubuntu shell, it began to pick up changes to the files instantly. So if someone is observing this: try bringing Docker up from WSL 2.
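For example (a sketch; the project path is hypothetical, and keeping the code on the Linux filesystem rather than under /mnt/c is what makes the file events reliable):

# inside the Ubuntu (WSL 2) shell
cd ~/projects/myapp
docker-compose up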

Docker-compose top level volume unable to find path

I have a pretty simple docker-compose setup which is working on my colleague's computer (*), but for some obscure reason it doesn't work on mine.
Here is my docker-compose.yml
version: '3.3'
services:
  ... there are other services that are starting successfully ...
  reporting:
    image: microsoft/dotnet:2.0-runtime
    hostname: reporting
    container_name: reporting
    volumes:
      - publish-output:/app
    command: dotnet /app/MocksGenerator.dll -s ${MSNAME_R} -p ${MSPORT_R} -c http://${CHOST} -m http://${MBHOST}${MSNAME_R}:${MBPORT}
    networks:
      - consul
    links:
      - mbreporting
      - consul
      - fabio
    depends_on:
      - mbreporting
      - consul
      - fabio
networks:
  consul:
volumes:
  publish-output:
    driver: local
    driver_opts:
      device: /mnt/d/Repositories/microservices.mocking/docker/PublishOutput
      o: bind
Here is the error I receive from docker-compose when I try to start it using docker-compose up:
ERROR: for reporting Cannot start service reporting: error while mounting volume '/var/lib/docker/volumes/betsreporting_publish-output/_data': error while mounting volume with options: type='' device='/mnt/d/Repositories/microservices.mocking/docker/PublishOutput' o='bind': no such file or directory
Running ls -la /mnt/d/Repositories/microservices.mocking/docker yields
drwxrwxrwx 0 root root 4096 May 30 16:12 PublishOutput
So the host directory exists for sure, but docker-compose can't seem to find it for some reason. Why?
(*) My colleague is using a volume of type bind. I tried that as well and it didn't work, for the same reason, so I decided to change the volume type; but the root problem seems to be that docker-compose can't find the host directory.
After a reset of the Docker daemon, the credentials-sharing prompt for the drive appeared again, and after re-sharing the disk it started working, even though it had been shared before. I suspect that sharing a disk with Docker does not apply to directories created AFTER the sharing was done (hence the re-sharing was needed), but I am not entirely sure; I will check that with the Docker engine folks.
One more thing: I also tried to run it from the Linux subsystem on Windows and it didn't work. I suspect that, again, the permissions of the Linux subsystem and Windows might not match, or the Docker engine might have a bug, because even after re-sharing the error persisted, so I had to run it from PowerShell instead.
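For reference (not part of the original answer): the error shows type='', and bind-style volumes with the local driver are usually declared with an explicit type: none. A hedged sketch of that form:

volumes:
  publish-output:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/d/Repositories/microservices.mocking/docker/PublishOutput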

Empty directory when mounting a volume using Docker for Windows

I am attempting to mount a volume from C:/Users into a container running on a docker-machine using the hyperv driver on Docker for Windows (Win 10 Pro). I am using the latest Docker (1.13.1), and the same version on the Hyper-V machine. I have tried switching to a local account, shared the drive in the Docker settings menu, and I've pretty much tried everything I could find on Google.
Running the test volume run command in the settings menu works for me. At this point I presume Hyper-V does not support mounting volumes from the host, however I can't find anywhere that explicitly says volume mounting will not work with Hyper-V.
This is my docker-compose config:
networks: {}
services:
  app:
    build:
      context: C:\users\deep\projects\chat\app
    command: sleep 3600
    image: app
    links:
      - rethinkdb
      - redis
    ports:
      - 4005:4005
      - 4007:4007
    volumes:
      - /c/users/deep/projects/chat/app:/usr/src/app:rw
  redis:
    image: redis
  rethinkdb:
    image: rethinkdb:2.3.5
version: '2.0'
volumes: {}
In my Dockerfile I copy files into the container at /usr/src/app. When I bring the services up with the volume specified in the compose file, the directory is emptied; however, if I omit this volume mount, I can see the files that I copied into the container from the Dockerfile.
Running verbose when starting my services, I can see a volumes path specified as 'Binds': [u'/c/users/deep/projects/chat/app:/usr/src/app:rw']. However, when I inspect the container using docker-compose inspect app, I see volumes set to null: "Volumes": null.
I presume at this point that mounting volumes into a container running inside a Hyper-V VM is not supported? Can someone confirm so that I can RIP :)
I think you just need to share the drive (C:) containing the folder from the Docker app settings.
See the "Shared Drives" paragraph from the getting started guide
I'm using mounted folders with a similar configuration and it works fine once the drive has been shared.
As stupid as it seems, this happens to me often. The solution is to un-check the C drive in Docker for Windows -> Settings -> Shared Drives, apply, and then check it again and apply.
You should use /c/Users, with a capital "C".
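Once the drive is shared and the casing is fixed, a quick verification (a sketch using the path from the question):

docker run --rm -v /c/Users/deep/projects/chat/app:/data alpine ls /data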
