I have Docker on CentOS 7 with SELinux set to enforcing on the host, and the Docker daemon is started with the --selinux-enabled flag.
When I try to run the following command
docker run -it -v /usr/local/xya/log:/usr/local/xya/log:z centos/systemd touch /usr/local/xya/log/test
I get the following error:
docker: Error response from daemon: error setting label on mount source '/usr/local/xya/log': relabeling content in /usr is not allowed.
According to some articles (http://jaormx.github.io/2018/selinux-and-docker-notes/), the z flag is supposed to relabel the mount so the container can write to it; I'm not sure what I am missing.
Docker version 19.03.3, build a872fc2f86
CentOS version: CentOS Linux release 7.5.1804
I recently had a similar (albeit different) issue, and I also found Juan's SELinux and Docker notes helpful.
I'm having trouble finding the documentation that made the following point, but I recall seeing it and got past my issue by accepting it as truth (I'll update this answer if/when I stumble across it again): not everything within /usr or /etc will grant you write access under SELinux, at least not in the context of Docker.
You can access the /etc and /usr directories within an SELinux context, but you cannot obtain write access everywhere, so z and Z will occasionally give you "unable to relabel" issues when spinning up Docker containers with volume mounts from those locations. However, if you have SELinux-protected files elsewhere, e.g. in a user's home directory, Docker can relabel those files appropriately -- that is, you can write to those SELinux-protected files/directories with the z or Z flags.
If you need to write within the /usr or /etc directories and are getting the "unable to relabel" error, the --privileged flag or the --security-opt label:disable flag should be used instead of the z syntax. Either allows write access, but you need to remove the z from your volume mount, as Docker would otherwise still give you the "unable to relabel" error.
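A sketch of both alternatives for the mount from the original question (note the :z suffix is removed in each case):

```shell
# Per-container SELinux label disable (narrower than --privileged)
docker run -it --security-opt label:disable \
  -v /usr/local/xya/log:/usr/local/xya/log \
  centos/systemd touch /usr/local/xya/log/test

# Or fully privileged (much broader; prefer the option above)
docker run -it --privileged \
  -v /usr/local/xya/log:/usr/local/xya/log \
  centos/systemd touch /usr/local/xya/log/test
```

Newer Docker releases also accept the --security-opt label=disable spelling.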
Note: you can also enable privileged mode in docker-compose.yml via privileged: true for a given service.
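For example, a minimal compose sketch for the mount from the question (the service name is illustrative; note the :z suffix is dropped):

```yaml
services:
  log-writer:            # illustrative service name
    image: centos/systemd
    privileged: true
    volumes:
      - /usr/local/xya/log:/usr/local/xya/log
```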
The image has no permission to edit or create new files in the /usr folder; per the docs, you may start the container with the --privileged parameter.
Related
When running docker run -v /sys:/sys:rw $NAME $COMMAND, the container tries to write to a file in /sys, but this causes a "Permission denied" error.
The container user is root. On the host, root has write permission to the files.
This issue is fixed with --privileged, but that option has serious security drawbacks.
How can a container have write access to sysfs without using --privileged?
I tried adding all the security capabilities with --cap-add, but it wasn't helpful.
Docker mounts the /sys filesystem read-only. On the Docker forum it was said that another container runtime, named Sysbox, can help, but I didn't try it:
https://forums.docker.com/t/unable-to-mount-sys-inside-the-container-with-rw-access-in-unprivileged-mode/97043
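For completeness, a hedged sketch of how a container is started under an alternative runtime such as Sysbox, assuming Sysbox is already installed and registered with the Docker daemon (I have not verified that this makes /sys writable for this use case):

```shell
# Select the Sysbox runtime for this one container (no --privileged needed)
docker run --runtime=sysbox-runc -it $NAME $COMMAND
```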
After freshly installing Ubuntu 18 I am receiving the following error when trying to launch a docker container that has a bind to a LVM (ext4) partition:
mkdir /storage: read-only file system
I have tried reinstalling the OS, reinstalling Docker and forcing the drive to mount as RW (everything that isn't docker can write to the drive).
The directory that is being bound is currently set to 777 permissions.
There seems to be almost no information available for this error.
I had the same issue, but I removed the snap version of Docker and reinstalled following the official Docker steps.
Remove Docker from snap:
sudo snap remove docker
Then remove the Docker directory and the old version:
sudo rm -R /var/lib/docker
sudo apt-get remove docker docker-engine docker.io
Install the official Docker package: https://docs.docker.com/install/linux/docker-ce/ubuntu/
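At the time, the official route boiled down to roughly the following (a sketch of the apt repository method; check the linked page for the current commands):

```shell
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
```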
I hope this helps!
Update 01/2021: while Snaps are still pretty cool, they don't always work. Specifically, the Docker Snap didn't work for Swarm mode, so I ditched it and installed Docker the recommended way.
Snaps are actually pretty cool, IMO, and I think it's beneficial to run Docker within a Snap rather than installing it directly on the system. The fact that you're getting a read-only permissions error is a good thing: it means a rogue container can't wreak havoc on your base OS. That said, here's how to fix your issue.
The reason this comes up is that Snaps expose the host OS as read-only, so that Docker can see the host's files but not modify them (hence the permission-denied error). There is, however, a directory the Docker Snap can write to: /var/snap/docker. Actually, a better writable location is /home; I created /home/docker for containers to have persistent storage from the host system.
In your case, you want /storage to be writable by Docker containers. I had a very similar use case, which led me to this SO post. I solved it by mounting my storage within the Snap-writable directory /home/docker; the easiest example is simply a directory on the same filesystem:
mkdir -p /home/docker/<container name>/data
In my case, I created a ZFS dataset at the location above instead of simply mkdir'ing a directory.
Then, the container I ran could write to that with something like:
docker run -ti -v /home/docker/<container name>/data:/data [...]
Now you have the best of both worlds: Docker running in a contained Snap environment and persistent storage. 🙌🏽
To solve this, create/run your container with --privileged. For example:
docker run --privileged -i --name master --hostname k8s-master -d ubuntu:20.04
Disclaimer/Edit 2
Some years later, for everyone reading this question: if you are on Windows and want to use Docker with Linux containers, I highly recommend not using Docker for Windows at all and instead starting the entire Docker environment inside a VM altogether. This ext3/NTFS issue will break your neck on so many different levels that installing docker-machine might not even be worth the effort.
Edit:
I am using docker-machine, which starts a boot2docker instance inside a VirtualBox VM with a shared folder on /c/Users, from which you can mount volumes into your containers. The permissions of said volumes are what the question is about. The VMs are stored under /c/Users/tom/.docker/
I chose the docker-machine VirtualBox workflow over Hyper-V because I need VBox in my daily workflow, and running Hyper-V and VirtualBox together on one system is not possible due to incompatibilities between the hypervisors.
Original question
I am currently trying to set up phpMyAdmin in a container on Windows, but I can't change the permissions of the config.inc.php file.
I found Cannot call chown inside Docker container (Docker for Windows) and thought it might be related, but it appears to apply only to MongoDB.
This is my docker-compose.yml
version: "3"
services:
  pma:
    image: (secret company registry)/phpmyadmin
    ports:
      - 9090:80
    volumes:
      - /c/Users/tom/projects/myproject/data/var/www/public/config.inc.php:/var/www/public/config.inc.php
Now, when I docker exec -it [container] bash and cd into the mounted directory, I try to run chmod on config.inc.php, but for some reason it fails silently:
root@22a4bag43245:/# ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
root@22a4bag43245:/# chmod 655 config.inc.php
root@22a4bag43245:/# ls -la config.inc.php
-rw------- 1 root root 0 Aug 11 15:11 config.inc.php
Considering the linked answer, I thought I could just move the volume out of my user home, but then VBox doesn't mount the folder at all.
How do I change the file permissions of /var/www/public/config.inc.php persistently?
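For contrast, on a native Linux filesystem the same chmod does take effect, which points at the vboxsf/NTFS mount rather than the command itself; a small runnable sketch (paths are illustrative):

```shell
# chmod on a regular Linux filesystem changes the mode as expected;
# on the vboxsf-mounted Windows folder above, the same call is ignored.
tmpdir=$(mktemp -d)
touch "$tmpdir/config.inc.php"
chmod 600 "$tmpdir/config.inc.php"
stat -c '%a' "$tmpdir/config.inc.php"   # prints 600
chmod 655 "$tmpdir/config.inc.php"
stat -c '%a' "$tmpdir/config.inc.php"   # prints 655
rm -r "$tmpdir"
```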
I had the same problem of not being able to change ownership even after using chown. As I researched, it was because of an NTFS volume being mounted inside an ext filesystem. So I used another approach.
Volumes internal to Docker are free from these problems, so you can mount your file on an internal Docker volume and then create a hard link to that file inside your local folder, wherever you want:
sudo ln $(docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name>) <absolute_path_of_destination>
This way you have your files in the desired place, inside Docker, without any permission issues, and you can modify the contents of the file as with a normal volume mount thanks to the hard link.
Here is a working implementation of this process which mounts and links a directory. In case you want to know the details, see the possible fix section in the issue.
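A quick local demonstration of why a hard link (as opposed to a symlink) behaves like the file itself; the paths are illustrative and no Docker is needed:

```shell
# A hard link is a second directory entry for the same inode:
# writes through either name are visible through the other.
tmpdir=$(mktemp -d)
echo "original" > "$tmpdir/volume_file"           # stands in for the file in the named volume
ln "$tmpdir/volume_file" "$tmpdir/project_link"   # note: ln without -s makes a hard link
echo "edited" > "$tmpdir/project_link"
cat "$tmpdir/volume_file"                         # prints "edited"
rm -r "$tmpdir"
```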
EDIT
Steps to implement this approach:
Mount the concerned file in an internal Docker volume (also known as a named volume).
Before making the hard link, make sure the volume and the concerned file are present. To ensure this, you should have run your container at least once before; or, if you want to automate the file creation, you can include a docker run which creates the required file and exits:
docker run --rm -itd \
  -v "<project_name>_<volume_name>:/absolute/path" \
  <image> bash -c "touch /absolute/path/<my_file>"
This docker run will create the volume and the required file. Here <project_name> is, by default, the name of the folder the project is in, and <volume_name> is the same one we want to use in our original container. <image> can be the same image already used in your original containers.
Create a hard link in your OS to the actual file location on your system. You can find the file location using docker volume inspect --format '{{ .Mountpoint }}' <project_name>_<volume_name> and appending /<my_file> to the result. Linux users can use ln in the terminal, and Windows users can use mklink in the command prompt.
In the link step we have not used /absolute/path, since the volume's mountpoint already refers to that location; we just need to refer to the file.
Try one of the following:
If you can rebuild the image (secret company registry)/docker-stretchimal-apache2-php7-pma, then inside the Dockerfile add the following:
USER root
RUN chmod 655 config.inc.php
Then you can rebuild the image and push it to the registry, and what you were doing should work. This should be your preferred solution, as you don't want to be manually changing the permissions every time you start a new container.
Alternatively, try to exec as the user root explicitly:
docker exec -it -u root [container] bash
I am trying to map a host folder into the container in the same way that is easily accomplished on Linux/Mac via -v "$(pwd)":/code. I can't come up with a simple example that makes this work with Windows containers.
docker build -t="webdav" .
docker run --rm -it -v C:\junk:C:\code --name webdav webdav powershell
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container f0fa313478fddb73e34d47699de0fc3c2a3bdb202ddcfc2a124c5c8b7523ec09 encountered an error during Start: failure in a Windows system call: The connection with the Virtual Machine hosting the container was closed. (0xc037010a).
I have tried countless other variations, and the accepted answer here gives the same error.
The docs seem to refer only to Docker Toolbox. The example only gives me an "invalid bind mount spec" error.
My Dockerfile
FROM microsoft/windowsservercore
RUN powershell -Command Add-WindowsFeature Web-Server
RUN powershell -Command mkdir /code
WORKDIR /code
ADD * /code/
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.14393 N/A Build 14393
Version 17.03.1-ce-win5 (10743)
Channel: stable
b18e2a5
Disclaimer: I originally posted this on the docker forums but haven't had any responses.
EDIT:
Found it. https://docs.docker.com/engine/reference/builder/#volume
"When using Windows-based containers, the destination of a volume inside the container must be one of: a non-existing or empty directory; or a drive other than C:"
Or here: https://docs.docker.com/engine/reference/commandline/run/#mount-volume--v---read-only
"The following examples will fail when using Windows-based containers, as the destination of a volume or bind-mount inside the container must be one of: a non-existing or empty directory; or a drive other than C:. Further, the source of a bind mount must be a local directory, not a file."
It strikes me that these are non-obvious places to document this difference. Where did you look for documentation of this issue? I'll add this there :)
Is there a general need for a summary of differences between Linux and Windows?
OLD ANSWER (for context)
Here's a step-by-step guide on mounting volumes with the GUI:
https://rominirani.com/docker-on-windows-mounting-host-directories-d96f3f056a2c
From reading through some other forum posts it sounds like special characters in passwords may trip things up.
If you are still having issues here is one thread you could read through:
https://github.com/docker/docker/issues/23992
Hope this helps!
I'm not sure if/where the moby repo docs publish to Docker docs, but this issue indicates that a volume cannot reference an existing folder in the container. In my example, I was first creating c:\code. If I change the command:
docker run --rm -it -v C:\junk:C:\code2 --name webdav webdav powershell
... it will create and mount c:\code2 in the container to point to c:\junk on the host.
When I run a container as a normal user I can map and modify directories owned by root on my host filesystem. This seems to be a big security hole. For example I can do the following:
$ docker run -it --rm -v /bin:/tmp/a debian
root@14da9657acc7:/# cd /tmp/a
root@f2547c755c14:/tmp/a# mv df df.orig
root@f2547c755c14:/tmp/a# cp ls df
root@f2547c755c14:/tmp/a# exit
Now my host filesystem will execute the ls command when df is typed (a mostly harmless example). I cannot believe that this is the desired behavior, but it is happening on my system (Debian stretch). The docker command has normal permissions (755, not setuid).
What am I missing?
Maybe it is good to clarify a bit more. I am not at the moment interested in what the container itself does or can do, nor am I concerned with the root access inside the container.
Rather, I notice that anyone on my system who can run a Docker container can use it to gain root access to my host system and read/write whatever they want as root: effectively giving all users root access. That is obviously not what I want. How do I prevent this?
There are many Docker security features available to help with issues like this. The specific one that will help you is User Namespaces.
Basically you need to enable User Namespaces on the host machine with the Docker daemon stopped beforehand:
dockerd --userns-remap=default &
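Equivalently, instead of passing the flag by hand each time, the remapping can be made persistent in the daemon configuration file /etc/docker/daemon.json (restart the daemon after editing):

```json
{
  "userns-remap": "default"
}
```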
Note this will forbid containers from running in privileged mode (a good thing from a security standpoint). The command above starts the Docker daemon with remapping enabled, which is why it must be stopped beforehand. When you run the Docker container, you can additionally restrict it to a given non-privileged user:
docker run -it --rm -v /bin:/tmp/a --user UID:GID debian
Either way, try entering the Docker container afterwards with your original command:
docker run -it --rm -v /bin:/tmp/a debian
If you then attempt to manipulate the host filesystem mapped into the Docker volume (in this case /bin), where files and directories are owned by root, you will receive a Permission denied error. This shows that User Namespaces provide the security functionality you are looking for.
I recommend going through the Docker lab on this security feature at https://github.com/docker/labs/tree/master/security/userns. I have done all of the labs and opened Issues and PRs there to ensure the integrity of the labs there and can vouch for them.
Access to run docker commands on a host is access to root on that host. This is by design, since the functionality to mount filesystems and isolate an application requires root capabilities on Linux. The security vulnerability here is any sysadmin granting docker access to users they wouldn't otherwise trust with root access on that host. Adding users to the docker group should therefore be done with care.
I still see Docker as a security improvement when used correctly, since applications run inside a container are restricted in what they can do to the host. The ability to cause damage is only granted via explicit options when running the container, like mounting the root filesystem as a rw volume, direct access to devices, or adding capabilities that permit escaping the namespace. Barring the explicit creation of those security holes, an application run inside a container has much less access than it would have outside of the container.
If you still want to try locking down users with access to docker, there are some additional security features. User namespacing is one of those, which prevents root inside of the container from having root access on the host. There's also interlock, which allows you to limit the commands available per user.
You're missing that containers run as uid 0 internally by default, so this is expected. If you want to restrict permissions further inside the container, build the image with a USER statement in the Dockerfile. Docker will then setuid to the named user at runtime instead of running as root.
Note that the uid of this user is not necessarily predictable, as it is assigned inside the image you build and won't necessarily map to anything on the outside system. However, the point is, it won't be root.
Refer to Dockerfile reference for more information.
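A minimal sketch of that USER approach (base image and user name here are illustrative, not from the question):

```dockerfile
FROM debian:stable
# Create an unprivileged user with a fixed uid so the mapping to the
# host is predictable, then drop root for everything in the container.
RUN useradd --uid 1000 --create-home appuser
USER appuser
CMD ["bash"]
```

With this, processes in the container run as uid 1000 rather than root, so a bind-mounted, root-owned host directory is no longer writable from inside.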