Mounted folder created as root instead of current user in Docker

I'm trying to mount a volume into my container from the docker run command.
It seems like the folder is always created as root instead of the container user. This means I lack rights on the folder (can't create or write files for logging).
Doing some testing using this command:
docker run -it --entrypoint /bin/bash -v $PWD/logs:/home/jboss/myhub/logs:rw myImage:latest
If I now run the command ls -ld /logs I get the result: drwxr-xr-x 2 root root 4096 Jun 12 13:01 logs/
Here we can see that only the owner has write rights, and root is the owner.
I would expect (I want) jboss to be the owner of this folder, or at least that all users have read/write rights given the :rw option in the -v parameter.
What am I not understanding here? How can I get it to work the way I want?

At the moment, this is a recurring issue with no simple answer.
There are two common approaches I hear of.
The first involves chowning the directory before using it:
RUN mkdir -p /home/jboss/myhub/logs ; chown -R jboss:jboss /home/jboss/myhub/logs
USER jboss
If you need to access the files from your host system with a different user, you can chmod the files that your app created inside the container, using your jboss user:
$ chmod -R +rw /home/jboss/myhub/logs
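Putting the first approach together, a minimal Dockerfile sketch could look like the following (base image and paths taken from the question; adjust to your setup). Note that for a host bind mount the host directory's owner still takes precedence over what the image sets up.
FROM jboss/wildfly
# Pre-create the log directory and hand ownership to the jboss user
RUN mkdir -p /home/jboss/myhub/logs && chown -R jboss:jboss /home/jboss/myhub/logs
# Build and run the rest as jboss
USER jboss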
The second approach involves creating the files with the appropriate permissions in the Dockerfile (or on your host system) before running your application:
$ touch /home/jboss/myhub/logs/app-log.txt
$ touch /home/jboss/myhub/logs/error-log.txt
$ chmod 766 /home/jboss/myhub/logs/app-log.txt
$ chmod 766 /home/jboss/myhub/logs/error-log.txt
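The same pre-creation can also be done at build time; a sketch of the equivalent Dockerfile lines (file names taken from above):
RUN mkdir -p /home/jboss/myhub/logs \
 && touch /home/jboss/myhub/logs/app-log.txt /home/jboss/myhub/logs/error-log.txt \
 && chmod 766 /home/jboss/myhub/logs/app-log.txt /home/jboss/myhub/logs/error-log.txt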
There certainly are more ways to achieve this, but I haven't yet heard of any more "native" solutions.
I'd like to find out an easier/more practical approach.

@trust512 has identified the problem correctly and also correctly stated that there are no universally agreed-upon "good solutions" to the problem, and has provided two kludgy solutions.
My solution is not better - just an alternative.
Mount the parent of the volume you are interested in.
For example '/home/user' should be owned by user, but if I create a volume
docker volume create myhome
and mount it like
docker container run --mount type=volume,source=myhome,destination=/home/user ...
then /home/user will be owned by root.
However, if I do it like this:
docker volume create myhome &&
docker container run --mount type=volume,source=myhome,destination=/home alpine:3.4 mkdir /home/user &&
docker container run --mount type=volume,source=myhome,destination=/home alpine:3.4 chown 1000:1000 /home/user
then when I run
docker container run --mount type=volume,source=myhome,destination=/home ...
then /home/user will have the appropriate owner.
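To check the result, you can run a throwaway container against the same volume:
docker container run --rm --mount type=volume,source=myhome,destination=/home alpine:3.4 ls -ld /home/user
The directory should now be listed with uid/gid 1000 instead of root.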

Related

Bind mounts created using rootless docker have a weird uid on the host machine. How can I delete these folders?

I have the following docker-compose.yml file which creates a bind mount located in $HOME/test on the host system:
version: '3.8'
services:
  pg:
    image: postgres:13
    volumes:
      - $HOME/test:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=pass
      - PGUSER=postgres
I bring up the container and inspect the permissions of the bind mount directory:
$ docker-compose up -d
$ ls -l ~
drwx------ 19 4688518 usertest 4096 Mar 11 17:06 test
The folder ~/test is created with a different uid in order to prevent accidental manipulation of this folder outside of the container. But what if I really do want to manipulate it? For example, if I try to delete the folder, I get a permission denied error as expected:
$ rm ~/test -rf
rm: cannot remove '/home/usertest/test': Permission denied
I suspect that I need to change uids using the newuidmap command somehow, but I'm not sure how to go about that.
How can I delete these folders?
But what if I really do want to manipulate it?
Using Docker, you can:
Run a command in the container as a specific user using the same UID (such as rm or sh), for example:
# Run shell session using your user with docker-compose
# You can then easily manipulate data
docker-compose exec -u 4688518 pg sh
# Run command directly with docker
# Docker container name may vary depending on your situation
# Use docker ps to see real container name
docker exec -it -u 4688518 stack_pg_1 rm -rf /var/lib/postgresql/data
Similar to the previous one, you can run a new container with:
# Will run sh by default
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox
# You can directly delete data with
docker run -it -u 4688518 -v $HOME/test:/tmp/test busybox rm -rf /tmp/test/*
This may be suitable if your pg container is stopped or deleted. The Docker image itself does not need to be the same as the one run by Docker Compose; you only need to specify the proper user UID.
Note: you may not be able to delete the folder itself using rm -rf /tmp/test, as user 4688518 may not have write permission on the /tmp folder, hence the use of /tmp/test/*
Use any of the above, but with the root user, such as -u 0 or -u root.
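For instance, reusing the pg service and bind mount from the question (a sketch; container or service names may differ on your machine):
# With docker-compose, as root inside the running container
docker-compose exec -u 0 pg rm -rf /var/lib/postgresql/data
# Or with a throwaway container
docker run --rm -u 0 -v $HOME/test:/tmp/test busybox rm -rf /tmp/test/*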
Without using Docker, you can effectively run the sudo command as suggested by the other answer, or even temporarily change the permissions of said folder and then change them back. However, from experience, when manipulating Docker-related data it's easier and less error-prone to use Docker itself.
Dealing with user ids in docker is tricky business because docker containers share the same kernel with the host operating system (at least on Linux). Consequently, any files that the container creates in the bind mount with a given uid will have the same uid on the host system.
Whenever the uid used by the container (let's say it's 2222) is different from your own uid (or you don't have write access to files owned by 2222), you won't be able to delete the folder. The easy workaround is to run sudo rm -rf ~/test.
Edit: If the user does not have admin rights, you can still give them rights to modify the generated files like so.
# Create a directory that the users can write in.
mkdir workspace
# Change the owner to the group of users that should have access (3333).
sudo chown -R 2222:3333 workspace
# Give group write access.
sudo chmod -R g+w workspace
# Make sure that all users that should have write access are in group 3333.
Then you can run the container using
docker run --rm -u `id -u`:3333 -v `pwd`/workspace:/workspace \
-w /workspace alpine:latest touch myfile
which creates myfile in the workspace folder with the right permissions so your users can delete the file again.
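As a concrete example of the last comment in the setup above, assuming a hypothetical host user alice who should get write access:
# Create a group with gid 3333 if it does not exist yet ("writers" is only an example name)
sudo groupadd -g 3333 writers
# Add alice to it (she must log out and back in for the change to take effect)
sudo usermod -aG writers alice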

Running docker as non-root with --user $(id -u) can't create /var/lib/

Hello, I'm fairly new to Docker and I am trying to get InfluxDB and Grafana up and running.
I already went through some problem solving and want to get you on the same page with a little summary.
Got a docker-compose file from here
did sudo docker-compose up -d
ran into the problem that arguments like INFLUXDB_DB=db0 inside the docker-compose.yml are not applied by the containers, so the database db0 wasn't created, for example
changes to the containers would persist though, so I could create a database and after a restart it was still there
tested each container as standalone with docker run
figured out that if I used a bind mount instead of docker volumes it worked for influxdb
the grafana container wouldn't start
sudo docker run --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
read here that I need to define a user with $(id -u) if I want to use a bind mount with grafana
did that, but then the user has no permission to create the /var/lib/grafana directory
sudo docker run --user $(id -u) --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 grafana/grafana:latest
GF_PATHS_DATA='/var/lib/grafana' is not writable.
You may have issues with file permissions, more information here: http://docs.grafana.org/installation/docker/#migration-from-a-previous-version-of-the-docker-container-to-5-1-or-later
mkdir: can't create directory '/var/lib/grafana/plugins': Permission denied
when I set the --user argument to root with 0:0 it works, but I read some best practices saying that running as root is OK for testing but not ideal for production
I also read that I can add a user to the docker group to give the user the permissions
there was no docker group on my system, so I read here that I can create one and then add the docker.socket to that group via /etc/docker/daemon.json, but that file doesn't exist and I can't create it, and I think I am deep enough down the rabbit hole to just stop and ask whether I am on the wrong path and did something wrong.
How can I start the containers as non-root without giving them too many permissions is my main question, I think.
using:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic
I solved this by running the container as root, chowning the data dir and then su-ing to the grafana user. In docker-compose, this looks like:
grafana:
  image: grafana/grafana:8.2.3
  volumes:
    - ./data/grafana:/var/lib/grafana
  user: root
  entrypoint:
    - /bin/sh
    - -c
    - |
      chown grafana /var/lib/grafana
      exec su grafana -s /bin/sh -c /run.sh
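The same pattern works with plain docker run; a sketch using the bind-mount path, image tag and /run.sh entrypoint from the compose snippet above:
docker run --user root -p 3000:3000 \
  --volume "$PWD/data/grafana:/var/lib/grafana" \
  --entrypoint /bin/sh grafana/grafana:8.2.3 \
  -c 'chown grafana /var/lib/grafana && exec su grafana -s /bin/sh -c /run.sh'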
I remember dealing with similar issues long ago.
I recommend running, one time:
mkdir data/grafana
and then:
sudo docker run --volume "$PWD/data:/var/lib" -p 3000:3000 grafana/grafana:latest
because as I recall whenever you volume mount something it mounts as owned by root. There is no way to change the ownership of the mounted volume.
But you can change the ownership of files and directories inside the mounted volume and they do not have to be root.
So in my solution you are just mounting data to /var/lib; /var/lib will be owned by root, but /var/lib/grafana will be owned by a regular user.
This would obviously hide the contents of any other folders in the /var/lib directory, but maybe there is nothing important there. If there is, you could configure the program to look for /var/lib/grafana at some other location, for example /srv/grafana, and then change the invocation to
sudo docker run --env GRAFANA_DIR=/srv/grafana --volume "$PWD/data:/srv" -p 3000:3000 grafana/grafana:latest
Lastly, I do not see why it would be a security issue to run as root.
That is something I am not sure I agree with.

Docker volumes : specifying permissions using mount options

I'm trying to use a named volume mounted in a Docker container, but get a Permission denied error when trying to create a file in the mounted folder. So I'm trying to use mount options when creating my volume, but that does not work as I want.
Introduction
I'm totally aware that when mounting a volume (created by docker volume create my_named_volume) with the option -v my_named_volume:/home/user/test or --mount type=volume,source=my_named_volume,target=/home/user/test, the folder inside the container (/home/user/test) will be owned by the root user, even if /home/user belongs to a user named user created in my Dockerfile. If I run:
docker run --rm \
--name test_named_volume \
--mount type=volume,source=my_named_volume,target=/home/user/test \
test_named_volume \
su user -c "touch /home/user/test/a"
Then I get:
touch: cannot touch '/home/user/test/a': Permission denied
I understand that. That's why I'm trying to use mount options when creating my volume.
Mount options
I'm specifying a uid when creating my volume, in order to make my user user able to create a file in that volume:
docker volume create my_named_volume \
--opt o=uid=1000
1000 is the uid of the user user created in my Dockerfile:
FROM debian:jessie
ENV HOME /home/user
RUN useradd \
--create-home \
--home-dir $HOME \
--uid 1000 \
user \
&& chown -R user:user $HOME
WORKDIR $HOME
But when running my container (with the same docker run command defined above), I'm getting an error (missing device in volume options):
docker: Error response from daemon: error while mounting volume '/var/lib/docker/volumes/my_named_volume/_data': missing device in volume options.
From the docs, I see that the device and type options are missing from my volume creation:
docker volume create my_named_volume \
--opt device=??? \
--opt type=??? \
--opt o=uid=1000
But I cannot see why I must give these options. device needs to be a block device, and from what I read, type should be something like ext4. But what I want is basically just to set the uid option on my volume. It looks like creating a block device would work, but it seems like too much configuration for a "simple" problem.
I have tried to use tmpfs for device and type, that works fine (file /home/user/test/a is created)... until my container is stopped (the data is not persisted, and that's logical because it's tmpfs). I want to persist that data written in the volume when the container exits.
What is the simplest way to specify permissions when mounting a named volume in a container? I don't want to modify my Dockerfile to use some magic (an entrypoint that chowns and then executes the command, for example). It seems possible using mount options; I feel like I'm close to the solution, but maybe I'm on the wrong track.
Not entirely sure what your issue is, but this worked for me:
docker run --name test_named_volume \
--mount type=volume,source=test_vol,target=/home/user \
--user user \
test_named_volume touch /home/user/a
I think where you could have gone wrong is:
Your mount target /home/user/test has not been created yet, since the useradd command in your Dockerfile only creates $HOME (/home/user), so Docker creates the directory within the container with root permissions.
You were not using the --user flag in docker run to run the container as the specified user.
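The reason this works: when an empty named volume is mounted over a path that already exists in the image, Docker copies that path's contents and ownership into the volume on first use, so /home/user keeps the user:user ownership set up by useradd. You can verify it with something like:
docker run --rm --mount type=volume,source=test_vol,target=/home/user \
  test_named_volume ls -ld /home/user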
Just had this issue. Contrary to popular belief, the mount did NOT pick up the permissions of the host-mounted directory; it reset them.
When I did this, the permissions were changed to 777 inside the container...
volumes:
  - ./astro-nginx-php7/logs:/home/webowner/zos/log:rw
The :rw made all the difference for me. My image was nginx:latest.
Docker compose version 3.3

How to let dockerd create bind volume dirs with the correct permission

When I run a docker container with the following command :
docker run -ti -v /tmp/michael:/opt/jboss/wildfly/standalone/log jboss
dockerd creates the directory /tmp/michael with owner and group = root. This of course results in permission denied errors for jboss when trying to write its logfiles.
I have to create /tmp/michael manually and chmod g+w it to fix that; dockerd then reuses the existing dir with the correct permissions. This is not what I want. Does anybody know how to force dockerd to create these directories with the correct permissions?
Additional information:
Dockerfile:
FROM jboss/wildfly
ADD entrypoint.sh /
ENTRYPOINT "/entrypoint.sh"
entrypoint.sh (for testing purposes, just a touch on a file instead of starting jboss):
#!/usr/bin/env bash
chown jboss:jboss /opt/jboss/wildfly/standalone/log
myfile=lala.`date +"%s"`
touch /opt/jboss/wildfly/standalone/log/${myfile}
But even here, if /tmp/michael does not exist or exists without group write permission, I receive permission denied. I have no idea how to get rid of that.
You have two possibilities:
chown somewhere in the ENTRYPOINT (using an entrypoint .sh script), to make this possible from inside the container.
Something like chown jboss:jboss /opt/jboss/wildfly/standalone/log
Change permissions directly outside the container (on the host).
You will not have the jboss user or group on the host, so you need to do it directly with the id.
Look at the container's /etc/passwd to get the jboss user id (docker exec jboss cat /etc/passwd), write down the id, and chown on the host:
chown 1001:1001 /tmp/michael
The best way is 1, of course; you can use a docker volume for it, etc. The easiest way is 2.
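For option 1, a minimal entrypoint sketch (assuming the container starts as root, e.g. via USER root in the Dockerfile or --user 0 at run time, so the chown is allowed; the touch mirrors the test entrypoint from the question):
#!/usr/bin/env bash
# Runs as root: fix ownership of the bind-mounted log directory,
# then do the real work as the jboss user
chown jboss:jboss /opt/jboss/wildfly/standalone/log
exec su jboss -s /bin/bash -c 'touch /opt/jboss/wildfly/standalone/log/lala.$(date +%s)'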
In addition to Alfonso's answer, there's a third option: use a named volume to initialize the directory. You'll need to create the directory with the correct permissions inside your image first. E.g. your Dockerfile could contain the lines:
RUN mkdir -p /opt/jboss/wildfly/standalone/log \
&& chmod 775 /opt/jboss/wildfly/standalone/log
Then on your host you can create the named volume in advance:
docker volume create --driver local \
--opt type=none \
--opt device=/tmp/michael \
--opt o=bind \
jboss_logs
And finally run your container using that named volume:
docker run -ti -v jboss_logs:/opt/jboss/wildfly/standalone/log jboss
As long as /tmp/michael exists but is empty, it will be initialized with the contents of your image, including file and directory permissions, before the container is started.
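After that first run, a quick check from the host should show that /tmp/michael picked up the ownership and mode baked into the image (the owner will display as a numeric uid unless the host has a matching user):
ls -ld /tmp/michael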

How to map a local folder as the volume in the docker container or image?

I am wondering if I can map a volume in Docker to another folder on my Linux host. The reason I want to do this is, if I don't misunderstand, that the default mapping folder is under /var/lib/docker/... and I don't have access to this folder. So I am thinking about changing that to a host folder I do have access to (for example /tmp/) when I create the image. I'm able to modify the Dockerfile if this can be done before creating the image. Or must this be done after creating the image or after creating the container?
I found this article, which helped me use a local directory as the volume in Docker.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just sub directories under /var/lib/docker so writing your own tool wouldn't be impossible, but you'd need root access to run it. Note that with access to docker on the host, there are likely a lot of ways to access root privileges on the host. Creating a hard link to the target folder should be all that's needed if both source and target are on the same file system.
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
-v test:/source -v `pwd`/data:/target \
busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.
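For example, assuming your uid on the host is 1000 (check with id -u), that chown could look like:
docker run --rm -v `pwd`/data:/target busybox chown -R 1000:1000 /target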
