I'm trying to use a named volume mounted in a Docker container, but I get a Permission denied error when trying to create a file in the mounted folder. So I'm trying to use mount options when creating my volume, but that does not work as I want.
Introduction
I'm totally aware that when mounting a volume (created by docker volume create my_named_volume) with the option -v my_named_volume:/home/user/test or --mount type=volume,source=my_named_volume,target=/home/user/test, the folder inside the container (/home/user/test) will be owned by root, even if /home/user belongs to a user user created in my Dockerfile. If I run:
docker run --rm \
--name test_named_volume \
--mount type=volume,source=my_named_volume,target=/home/user/test \
test_named_volume \
su user -c "touch /home/user/test/a"
Then I get:
touch: cannot touch '/home/user/test/a': Permission denied
I understand that. That's why I'm trying to use mount options when creating my volume.
Mount options
I'm specifying a uid when creating my volume, in order to make my user user able to create a file in that volume:
docker volume create my_named_volume \
--opt o=uid=1000
1000 is the uid of the user user created in my Dockerfile:
FROM debian:jessie
ENV HOME /home/user
RUN useradd \
--create-home \
--home-dir $HOME \
--uid 1000 \
user \
&& chown -R user:user $HOME
WORKDIR $HOME
But when running my container (with the same docker run command as above), I get an error (missing device in volume options):
docker: Error response from daemon: error while mounting volume '/var/lib/docker/volumes/my_named_volume/_data': missing device in volume options.
From the docs, I see that the device and type options are missing from my volume creation:
docker volume create my_named_volume \
--opt device=??? \
--opt type=??? \
--opt o=uid=1000
But I cannot see why I must give these options. device needs to be a block device and, from what I read, type should be something like ext4. What I want is basically just to set the uid option on my volume. Creating a block device might work, but it seems like too much configuration for a "simple" problem.
I have tried using tmpfs for device and type, and that works fine (the file /home/user/test/a is created)... until my container is stopped (the data is not persisted, which is logical since it's tmpfs). I want the data written to the volume to persist when the container exits.
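For reference, that tmpfs attempt looks roughly like this (a sketch reconstructed from the description above, using the local driver):
docker volume create my_named_volume \
    --opt type=tmpfs \
    --opt device=tmpfs \
    --opt o=uid=1000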
What is the simplest way to specify permissions when mounting a named volume in a container? I don't want to modify my Dockerfile to use some magic (an entrypoint that chowns the directory and then executes the command, for example). It seems possible using mount options; I feel like I'm close to the solution, but maybe I'm on the wrong track.
Not entirely sure what your issue is, but this worked for me:
docker run --name test_named_volume \
--mount type=volume,source=test_vol,target=/home/user \
--user user \
test_named_volume touch /home/user/a
I think where you could have gone wrong is:
Your mount target /home/user/test has not been created yet, since the useradd command in your Dockerfile only creates $HOME (/home/user). So docker creates the directory within the container with root ownership (a Dockerfile sketch addressing this follows below the list).
You were not using the --user flag in docker run to run the container as the specified user.
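For the first point, one option (a sketch on my part, not something from your Dockerfile) is to create the mount target with the right ownership at build time:
FROM debian:jessie
ENV HOME /home/user
RUN useradd \
    --create-home \
    --home-dir $HOME \
    --uid 1000 \
    user \
    && mkdir -p $HOME/test \
    && chown -R user:user $HOME
WORKDIR $HOME
When an empty named volume is first mounted over a directory that already exists in the image, docker copies that directory's contents and ownership into the volume, so /home/user/test should stay owned by user.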
Just had this issue. Contrary to popular belief, the mount did NOT pick up the permissions of the host-mounted directory; it reset them.
When I did this, the permissions were changed to 777 inside the container...
volumes:
- ./astro-nginx-php7/logs:/home/webowner/zos/log:rw
The :rw made all the difference for me. My image was nginx:latest.
Docker Compose file format version 3.3.
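For context, a minimal sketch of how that volumes entry sits in the compose file (the service name web is an assumption; the rest comes from this answer):
version: "3.3"
services:
  web:
    image: nginx:latest
    volumes:
      - ./astro-nginx-php7/logs:/home/webowner/zos/log:rw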
Related
What is the best practice for handling uid/gid and permissions with jupyter notebooks in docker?
When one of the jupyter+python images from jupyter/docker-stacks is run, a notebook gets saved with uid/gid 1000:100. This will fail if a mounted host folder is not writable by "other", and making it world-writable is an ugly approach.
The notebook image can be run specifying the NB_UID and NB_GID, like this:
docker run -p 8888:8888 -it --rm \
-e NB_UID=$(id -u) \
-e NB_GID=$(id -g) \
-e GRANT_SUDO=yes \
--user root \
--mount type=bind,source="$(pwd)",target=/home/jovyan/work \
myimage
In this case, the uid/gid of jovyan in the container match my uid/gid, so there is no permission problem writing to a mounted folder. However, now jovyan (the container user) cannot access /opt/conda, which is owned by 1000:100 and is not readable by other. So none of the add-on packages can be loaded!
We could also run docker build with --build-arg myuid=$(id -u) --build-arg mygid=$(id -g)
I believe this would result in both /home/jovyan and /opt/conda being owned by the same uid:gid as me (as sketched below), so everything would be fine. However, the resulting image could then be used only by me: if I give it to my collaborators (who have different uids), it will not work.
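For illustration, that build-arg idea might look roughly like this (the jupyter/scipy-notebook base and the jovyan/conda paths are from this question; the ARG names and the chown approach are assumptions of mine):
FROM jupyter/scipy-notebook
ARG myuid=1000
ARG mygid=100
USER root
# re-own the notebook home and the conda install to the build-time uid/gid
RUN chown -R ${myuid}:${mygid} /home/jovyan /opt/conda
USER ${myuid}
It would then be built with docker build --build-arg myuid=$(id -u) --build-arg mygid=$(id -g) . as described above.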
So it seems that every possibility is either blocked or a poor choice. File permissions in Docker are difficult.
Can anyone share the best approach for this problem?
The best practice with Jupyter Notebook is to use your own user id and group id, so that the new files you create have the correct ownership. Then use --group-add users to add yourself to the users group, which gives access to the required folders (e.g. /opt/conda).
The full command would be:
docker run -it --rm --user $(id -u):$(id -g) --group-add users -v "$(pwd)":/home/jovyan -p 8888:8888 jupyter/scipy-notebook
I encountered the same problem and found a good solution, which is referred to here.
COPY --chown=1000:100 hostfolder/* /home/$NB_USER/work/
Note that environment or argument expansion in the --chown flag is not implemented yet, so the following line would cause a build error (failed to build: unable to convert uid/gid chown string to host mapping: can't find uid for user $NB_UID: no such user: $NB_UID):
# COPY --chown=$NB_USER:$NB_GID hostfolder/* /home/$NB_USER/work/
Therefore, you need to hard-code the user (jovyan) and group name (users), or the ids (1000:100).
I'm trying to mount a volume into my container from the docker run command.
It seems like the folder is always created as root instead of the container user, which leaves me without rights on the folder (I can't create or write files for logging).
Doing some testing using this command:
docker run -it --entrypoint /bin/bash -v $PWD/logs:/home/jboss/myhub/logs:rw myImage:latest
If I now do the command ls -ld /logs I get the result: drwxr-xr-x 2 root root 4096 Jun 12 13:01 logs/
Here we can see that only the owner has write rights, and root is the owner.
I would expect (and want) jboss to be the owner of this folder, or at least that all users have read/write rights given the :rw option in the -v parameter.
What am I not understanding here? How can I get it to work the way I want?
At the moment, this is a recurring issue with no simple answer.
There are two common approaches I hear of.
The first involves chowning the directory before using it.
RUN mkdir -p /home/jboss/myhub/logs ; chown -R jboss:jboss /home/jboss/myhub/logs
USER jboss
In case you need to access the files from your host system with a different user, you can chmod the files that your app created inside the container as the jboss user:
$ chmod -R +rw /home/jboss/myhub/logs
The second approach involves creating the files with an appropriate chmod in the Dockerfile (or on your host system) before running your application.
$ touch /home/jboss/myhub/logs/app-log.txt
$ touch /home/jboss/myhub/logs/error-log.txt
$ chmod 766 /home/jboss/myhub/logs/app-log.txt
$ chmod 766 /home/jboss/myhub/logs/error-log.txt
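Done at build time instead, the same idea would look roughly like this in the Dockerfile (a sketch, using the paths from this question):
RUN mkdir -p /home/jboss/myhub/logs \
    && touch /home/jboss/myhub/logs/app-log.txt /home/jboss/myhub/logs/error-log.txt \
    && chmod 766 /home/jboss/myhub/logs/app-log.txt /home/jboss/myhub/logs/error-log.txt
Keep in mind that if the directory is later bind-mounted from the host, the mount hides whatever the image put there, so in that case the host-side variant above is the one that matters.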
There certainly are more ways to achieve this, but I haven't yet heard of any more "native" solutions.
I'd like to find out an easier/more practical approach.
@trust512 has identified the problem correctly and also correctly stated that there are no universally agreed-upon "good solutions" to the problem. @trust512 has provided two kludgy solutions.
My solution is no better - just an alternative.
Mount the parent of the volume you are interested in.
For example '/home/user' should be owned by user, but if I create a volume
docker volume create myhome
and mount it like
docker container run --mount type=volume,source=myhome,destination=/home/user ...
then /home/user will be owned by root.
However, if I do it like
docker volume create myhome &&
docker container run --mount type=volume,source=myhome,destination=/home alpine:3.4 mkdir /home/user &&
docker container run --mount type=volume,source=myhome,destination=/home alpine:3.4 chown 1000:1000 /home/user
then when I run
docker container run --mount type=volume,source=myhome,destination=/home ...
then /home/user will have the appropriate owner.
I'm in the process of setting up dovecot as a docker container. I want to store the Maildir via NFS on a NAS.
I'm creating the docker volume like this:
docker volume create \
--driver local \
--opt type=nfs \
--opt o=addr=<ip>,rw \
--opt device=:/vmail \
vmail
In the Dockerfile, I have:
RUN useradd -m -p vmail -s /bin/false vmail
VOLUME /home/vmail
and to run the docker container, I call:
docker run \
-dit \
-p 993:993 \
--mount source=vmail,target=/home/vmail \
my_dovecot
but as a result I get:
docker: Error response from daemon: chown /var/lib/docker/volumes/vmail/_data: operation not permitted.
The issue is clearly related to the way I mount the NFS volume: if I drop the --mount statement, it works OK (but then I obviously can't access my Maildir data from the NAS).
I'm pretty sure this is related to the fact that dovecot tries to access the Maildir as the vmail user, and that user doesn't have permissions on the NFS share - but even giving everybody write access on the NFS share doesn't make a difference.
I'm looking for any advice to get this NFS volume properly mounted into my docker container.
In the meantime I found the issue.
To fix this, I had to remove the -m from the useradd command, to prevent it from creating the /home/vmail directory:
RUN useradd -p vmail -s /bin/false vmail
VOLUME /home/vmail
because if that directory already exists, then when the volume is mounted into that same place, docker tries to copy the existing folder data into the volume and chown it to the directory's ownership. Since the volume comes via NFS from a NAS, it doesn't map to proper uids (everything shows up as nobody), and the chown fails.
I found references to nocopy to prevent docker from doing this, but I haven't figured out how to set that in the docker run/create statement.
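For reference, --mount accepts a volume-nocopy option that disables this copy step; I haven't verified it against this exact NFS setup, but it would be set roughly like this:
docker run \
    -dit \
    -p 993:993 \
    --mount type=volume,source=vmail,target=/home/vmail,volume-nocopy \
    my_dovecot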
When I run a docker container with the following command:
docker run -ti -v /tmp/michael:/opt/jboss/wildfly/standalone/log jboss
dockerd creates the directory /tmp/michael with owner and group = root. This of course results in permission denied errors for jboss when it tries to write its logfiles.
I have to create /tmp/michael manually and give it chmod g+w permissions to fix that. dockerd then reuses the existing dir with the correct permissions. This is not what I want. Does anybody know how to force dockerd to create these directories with the correct permissions?
Additional information:
Dockerfile:
FROM jboss/wildfly
ADD entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh (for testing purposes, just a touch on a file instead of starting jboss):
#!/usr/bin/env bash
chown jboss:jboss /opt/jboss/wildfly/standalone/log
myfile=lala.`date +"%s"`
touch /opt/jboss/wildfly/standalone/log/${myfile}
But even here, if /tmp/michael does not exist and does not have group +w, I still receive permission denied. I have no idea how to get rid of that.
You have two possibilities:
chown somewhere in the ENTRYPOINT (using an .sh script as the entrypoint), to make this possible from inside the container (see the sketch after this list).
Something like chown jboss:jboss /opt/jboss/wildfly/standalone/log
Change permissions directly outside the container (on the host).
You will not have a jboss user or group on the host, so you need to do it directly with the id.
Look at the container's /etc/passwd to get the jboss user id (docker exec jboss cat /etc/passwd), write down the id, and chown on the host:
chown 1001:1001 /tmp/michael
The best way is 1, of course (and you can use a docker volume for it, etc.). The easiest way is 2.
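A rough sketch of option 1's entrypoint script (the standalone.sh path is an assumption based on the jboss/wildfly image; the container has to start as root for the chown to succeed):
#!/usr/bin/env bash
# fix ownership of the mounted log directory, then drop to the jboss user
chown -R jboss:jboss /opt/jboss/wildfly/standalone/log
exec su -s /bin/bash jboss -c "/opt/jboss/wildfly/bin/standalone.sh"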
In addition to Alfonso's answer, there's a third option to use a named volume to initialize the directory. You'll need to create the directory with the correct permissions inside your image first. E.g. your Dockerfile could contain the lines:
RUN mkdir -p /opt/jboss/wildfly/standalone/log \
&& chmod 775 /opt/jboss/wildfly/standalone/log
Then on your host you can create the named volume in advance:
docker volume create --driver local \
--opt type=none \
--opt device=/tmp/michael \
--opt o=bind \
jboss_logs
And finally run your container using that named volume:
docker run -ti -v jboss_logs:/opt/jboss/wildfly/standalone/log jboss
As long as /tmp/michael exists but is empty, it will be initialized with the contents of your image, including file and directory permissions, before the container is started.
I am wondering if I can map a volume in docker to another folder on my linux host. The reason I want to do this is that, if I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing it to a host folder I do have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image. Or must this be done after creating the image, or after creating the container?
I found this article, which helped me use a local directory as the volume in docker:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host. They are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible, but you'd need root access to run it. (Note that with access to docker on the host, there are likely many ways to gain root privileges on the host anyway.) Creating a hard link to the target folder should be all that's needed, provided both source and target are on the same filesystem.
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.