I'm trying to mount an NFS (version 3) share, which exists on the host, into a Docker container, but it's not working.
Example:
docker run -v /mnt/test001/:/mnt/test001/ hello-world
This mount is an NFS share with squashed root, which I suspect is the cause of the problem. Docker seems to run as root (that's what ps -ef shows me, anyway).
The error message states:
dockerd[...]: time="..." level=error msg="Handler for POST
/.../containers/.../start returned error: error while creating mount
source path '/mnt/.../': mkdir /mnt/.../: permission denied"
What is required here for this mount to work? Removing the root squash seems unsafe and like overkill. Is it possible to give my local root read rights while still squashing root?
EDIT: The permissions were drwxrwx---.
The problem was that world/other had no read rights.
As a result, the subdirectory I wanted to mount could not be read by my local root, because that root got squashed -> it needed world/other rights, and there were none.
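For illustration, the permission mechanics can be reproduced with a plain local directory (the /tmp/test001 path is just an example): mode 770 locks out a squashed root, which NFS maps to nobody, while adding the read/execute bits for "other" fixes it.

```shell
# Example only: reproduce the permission mechanics locally.
mkdir -p /tmp/test001
chmod 770 /tmp/test001       # drwxrwx--- : no bits for "other",
                             # so squashed root (mapped to nobody) is locked out

# Fix: grant read+execute to "other" so a squashed root can list and traverse
chmod o+rx /tmp/test001
stat -c '%a' /tmp/test001    # -> 775 (drwxrwxr-x)
```

This keeps root_squash in place on the export; only the directory's own mode bits change.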
I'm following https://www.redhat.com/sysadmin/podman-inside-container, targeting rootless Podman running a rootless Podman container. podman info works, but shows the warning message
WARN[0000] "/" is not a shared mount, this could cause issues or
missing mounts with rootless containers
Running mount --make-rshared / as root solves that. My question is: how can I prepare my image so that I don't need root privileges every time I run it?
I tried RUN mount --make-rshared / in the Dockerfile. That failed with "Permission denied", which confuses me, because permission is not denied for anything else, like yum, useradd, and file access.
I also tried putting the command in an init script, saved in /etc/init.d and registered with chkconfig. It doesn't seem to be executed when I run the image, although I added all runlevels: 123456. I can execute it directly, but of course only as root.
I'm rather new to Docker/Podman. There are some other question marks for me, like the missing shell commands login and runlevel, but the important goal is to get rid of the warning message.
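One hedged sketch of a workaround, since mount --make-rshared needs CAP_SYS_ADMIN and therefore cannot succeed at build time (RUN) or in a plain unprivileged run: perform the mount from an entrypoint wrapper, and grant the capability only at run time (e.g. podman run --privileged; whether a narrower grant suffices depends on your setup). The file name entrypoint.sh and the warn-and-continue fallback are assumptions, not from the Red Hat article.

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): try to fix mount propagation, then hand off.
# The mount only succeeds if the container was granted CAP_SYS_ADMIN at run
# time; otherwise we print a warning and continue so the image still starts.
mount --make-rshared / 2>/dev/null \
  || echo "entrypoint: could not make / rshared (need CAP_SYS_ADMIN)" >&2
exec "$@"
```

In the Dockerfile you would then add COPY entrypoint.sh /entrypoint.sh and ENTRYPOINT ["/entrypoint.sh"], so the mount attempt runs at container start rather than at build time.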
When running docker run -v /sys:/sys:rw $NAME $COMMAND, the container tries to write to a file in /sys, which causes a "Permission denied" error.
The container user is root. On the host, root has write permission to the files.
The issue is fixed by --privileged, but that option has serious security drawbacks.
How can a container get write access to sysfs without using --privileged?
I tried adding all the security capabilities with --cap-add, but it didn't help.
Docker mounts the /sys filesystem read-only. A post on the Docker forum suggested that another container runtime, Sysbox, can help, but I haven't tried it.
https://forums.docker.com/t/unable-to-mount-sys-inside-the-container-with-rw-access-in-unprivileged-mode/97043
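A narrower workaround that sometimes avoids --privileged (whether it works depends on the kernel subsystem involved; the /sys/class/gpio path below is only an assumed example): bind-mount just the sysfs subtree the container actually needs instead of all of /sys.

```shell
# Sketch: expose only the needed sysfs subtree instead of all of /sys.
# /sys/class/gpio is an assumed example -- substitute the subtree you need.
docker run --rm \
  -v /sys/class/gpio:/sys/class/gpio \
  "$NAME" "$COMMAND"
```

The whole-of-/sys mount stays read-only, but the explicitly bound subtree is a separate bind mount and is not subject to that restriction.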
So I have this remote folder /mnt/shared mounted with FUSE. It is mostly available, but there are occasional disconnections. The mounted folder /mnt/shared becomes available again when the reconnection happens.
The issue is that I expose this folder to my app through a Docker volume as /shared. When I start the container, the volume is available. But if a disconnection happens in between, then even once /mnt/shared is available again on the host machine, the /shared folder is not accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
The only fix I have found is to docker restart e313ec554814, which causes downtime for my app and is therefore not an acceptable solution.
So my questions are:
Is this somehow a Docker "bug", in that it does not reconnect to the mounted folder when it becomes available again?
Can I fix this manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you currently mount the volume into your container like so:
docker run -v /mnt/shared:/shared my-image
I would create an intermediate directory /mnt/base/shared and mount that instead:
docker run -v /mnt/base/shared:/base/shared my-image
and then adjust the code to refer to the new path, or create a link from /base/shared to /shared inside the container.
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when there is a disconnection, and a new directory is created once the connection is back. But the container started with a mapping to the old directory, which was deleted. By creating an intermediate directory and mapping to that instead, you avoid the stale mapping.
Another solution that might work is to mount the directory with bind-propagation=shared, e.g.:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
See the Docker docs explaining bind propagation.
I have Docker on CentOS 7 with SELinux set to enforcing on the host, and the Docker daemon is started with the --selinux-enabled flag.
When I try to run the following command
docker run -it -v /usr/local/xya/log:/usr/local/xya/log:z centos/systemd touch /usr/local/xya/log/test
I get the following error:
docker: Error response from daemon: error setting label on mount source '/usr/local/xya/log': relabeling content in /usr is not allowed.
According to some articles (http://jaormx.github.io/2018/selinux-and-docker-notes/), the z flag is supposed to make /usr writable; not sure if I am missing something.
Docker version 19.03.3, build a872fc2f86
CentOS version: CentOS Linux release 7.5.1804
I recently had a similar (albeit different) issue and found Juan's SELinux and Docker notes helpful.
I'm having trouble finding the documentation that highlighted the following point, but I recall seeing it and was able to get around my issue by accepting it as truth (I will update this if/when I stumble across it again): not everything within /usr or /etc will grant you write access under SELinux, at least not in the context of Docker.
You can access the /etc and /usr directories within an SELinux context, but you cannot obtain write access everywhere, so z and Z will occasionally give you unable-to-relabel issues when spinning up Docker containers with volume mounts from those locations. However, if you have SELinux-protected files elsewhere, e.g. in a user's home directory, Docker can relabel those files appropriately -- that is, you'd be able to write to those SELinux-protected files/directories with the z or Z flags.
If you need to write within the /usr or /etc directories and are getting the unable-to-relabel alert, the --privileged flag or the --security-opt label:disable flag should be used instead of the z syntax. This gives you write access, but you need to remove the z from your volume mount, as Docker would otherwise still print the unable-to-relabel message.
Note: you can also enable privileged mode in docker-compose.yml via privileged: true for a given service.
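For reference, a docker-compose fragment showing both options side by side (the service name is just an example; prefer security_opt over full privileged mode where it works, since it is the narrower grant):

```yaml
services:
  app:
    image: centos/systemd        # example image from the question
    # Broad option: full privileged mode
    privileged: true
    # Narrower alternative: disable SELinux labeling for this container only
    # security_opt:
    #   - label:disable
    volumes:
      - /usr/local/xya/log:/usr/local/xya/log
```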
The image has no permission to edit or create new files in the /usr folder; per the docs, you may start the container with the --privileged parameter.
Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead running the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine, which starts a boot2docker instance inside a VirtualBox VM with a shared folder on /c/Users, from which you can mount volumes into your containers. The permissions of said volumes are what this question is about. The VMs are stored under /c/Users/tom/.docker/
I chose the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the hypervisors.
Original question
I am currently trying to set up phpMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found Cannot call chown inside Docker container (Docker for Windows) and thought it might be related, but it appears to apply only to MongoDB.
This is my docker-compose.yml:
version: "3"
services:
  pma:
    image: (secrect company registry)/phpmyadmin
    ports:
      - 9090:80
    volumes:
      - /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and cd into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently:
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245: chmod 655 config.inc.php
root@22a4bag43245: ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
I had the same problem of not being able to change ownership even after using chown. As I researched, it was because the NTFS volume is mounted inside an ext filesystem, so I used another approach.
Volumes internal to Docker are free from these problems, so you can keep your file on an internal Docker volume and then create a hard link to that file wherever you want inside your local folder:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you keep your files in the desired place, inside Docker, without any permission issues, and thanks to the hard link you can modify the file's contents just as with a normal volume mount.
Here is a working implementation of this process which mounts and links a directory. If you want to know the details, see the possible-fix section in the issue.
EDIT
Steps to implement this approach:
1. Mount the concerned file in an internal Docker volume (also known as a named volume).
2. Before making the hard link, make sure the volume and the concerned file exist. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run that creates the required file and exits.
docker run --rm -itd \
-v "<Project_name>_<volume_name>:/absolute/path" \
<image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here, <project_name> is by default the name of the folder the project is in, and <volume_name> is the same one we want to use in our original container. <image> can be the same image already used by your original containers.
3. Create a hard link in your OS to the actual file location on your system. You can find the file location using docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to the result. Linux users can use ln in a terminal; Windows users can use mklink in a command prompt.
In step 3 we did not use /absolute/path, since <volume_name> already refers to that location and we only need to append the file name.
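Putting the steps together, a minimal end-to-end sketch; the project, volume, and file names here are placeholders, not the asker's real values.

```shell
# Hypothetical names -- substitute your own project, volume, and file.
PROJECT=myproject
VOLUME=config_vol
FILE=config.inc.php

# Steps 1+2: create the named volume and the file inside it, then exit.
docker run --rm -v "${PROJECT}_${VOLUME}:/data" busybox touch "/data/${FILE}"

# Step 3: hard-link the file from the volume's mountpoint into the project.
MP=$(docker volume inspect --format '{{ .Mountpoint }}' "${PROJECT}_${VOLUME}")
sudo ln "${MP}/${FILE}" "./${FILE}"
```

After this, edits to ./config.inc.php and to the file inside the container see the same inode, so no NTFS permission translation is involved.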
Try one of the following:
If you can rebuild the image image: (secrect company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
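For completeness, a minimal Dockerfile sketch of that approach; the base image name is copied from the answer above, and the in-container path is an assumption taken from the compose file in the question.

```dockerfile
FROM (secrect company registry)/docker-stretchimal-apache2-php7-pma

USER root
# Relax permissions on the config file baked into the image,
# instead of fighting the NTFS/vboxsf mount at run time
RUN chmod 655 /var/www/public/config.inc.php
```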
Try to exec as the root user explicitly:
docker exec -it -u root [container] bash