I have been working with a docker container for a few months now and was unaware of the fact that everything I was creating (folders, files) was created under the root user of my container. Now I want to reclaim ownership over all of these files so that I have the permissions to move or write into them while I am outside of the container.
To make it a bit more concrete/clear, I have a local user named johndoe, and a local folder under the path of /home/johndoe/pythoncodes which is owned by johndoe. I mount this local folder to my docker container when I run the command
docker run -v /home/johndoe/pythoncodes:/home/johndoe/pythoncodes ...
Then, when inside my container, I created a folder at /home/johndoe/pythoncodes/ProjectRepo. ProjectRepo is now owned by root from the container, so when I leave the container and go back to being the johndoe user, I no longer have the permissions to do anything with this folder (e.g. if I try to run git init I get a permission error because the .git folder cannot be created).
I have seen answers on how to create a container that logs me in as my local user, and I have gotten this to work as well by using adduser, but this only seems helpful for creating new files and doesn't help with all of the files that have already been created as root.
"but this only seem helpful for creating new files and doesn't help me with all of these files that have been already created as root"
You can use chown directly from within the docker container to change the ownership of these bind-mounted files. But for this to work you will need to mount two files that contain the user and group information for your user, /etc/passwd and /etc/group (below, :ro means 'read-only').
$ docker run -idt -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro --name try ubuntu:16.04 /bin/bash
$ docker exec -it try mkdir -p /tmp/abc/newfolder
$ cd abc
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 root root 4.0K Jul 7 16:43 newfolder
$ sudo chown -R atg:atg .
[sudo] password for atg:
$ ls -alh
total 12K
drwxr-xr-x 3 atg atg 4.0K Jul 7 16:43 .
drwxr-xr-x 60 atg atg 4.0K Jul 7 16:42 ..
drwxr-xr-x 2 atg atg 4.0K Jul 7 16:43 newfolder
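If you would rather not mount /etc/passwd and /etc/group, a minimal alternative sketch (assuming your host user johndoe has UID and GID 1000; check with id johndoe on the host) is to chown by numeric IDs from inside a throwaway container, since numeric ownership maps straight back to the host:
$ docker run --rm -v /home/johndoe/pythoncodes:/home/johndoe/pythoncodes ubuntu:16.04 \
    chown -R 1000:1000 /home/johndoe/pythoncodes
Back on the host, ls -l /home/johndoe/pythoncodes should then show johndoe as the owner.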
My docker image has several files in its /tmp directory.
Example
/tmp # ls -al
total 4684
drwxrwxrwt 1 root root 4096 May 19 07:09 .
drwxr-xr-x 1 root root 4096 May 19 08:13 ..
-rw-r--r-- 1 root root 156396 Apr 24 07:12 6359688847463040695.jpg
-rw-r--r-- 1 root root 150856 Apr 24 06:46 63596888545973599910.jpg
-rw-r--r-- 1 root root 142208 Apr 24 07:07 63596888658550828124.jpg
-rw-r--r-- 1 root root 168716 Apr 24 07:12 63596888674472576435.jpg
-rw-r--r-- 1 root root 182211 Apr 24 06:51 63596888734768961426.jpg
-rw-r--r-- 1 root root 322126 Apr 24 06:47 6359692693565384673.jpg
-rw-r--r-- 1 root root 4819 Apr 24 06:50 635974329998579791105.png
When I run the following command to start a container from this image:
sudo docker run -v /home/media/simple_dir2:/tmp -d simple_backup
The expected behavior is that if I run ls -al /home/media/simple_dir2,
then the files show up.
But the actual behavior is that nothing exists in /home/media/simple_dir2.
On the other hand, if I run the same image without the volume option such as:
sudo docker run -d simple_backup
And enter that container using:
sudo docker exec -it <simple_backup container id> /bin/sh
ls -al /tmp
Then the files exist.
TL;DR
I want to mount a volume (directory) on the host, and have it filled with the files which are inside of the docker image.
My env
Ubuntu 18.04
Docker 19.03.6
From: https://docs.docker.com/storage/bind-mounts/
Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount. This can be beneficial, such as when you want to test a new version of your application without building a new image. However, it can also be surprising and this behavior differs from that of docker volumes.
"So, if host os's directory is empty, then container's directory will override is that right?"
Nope, it doesn't compare them for which one has files; it just overrides the folder on the container with the one on the host no matter what.
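For completeness, here is a hedged sketch of two common ways to get the image's /tmp files onto the host (the volume name backup_data and container name sb are made up for the example). A named volume, unlike a bind mount, is pre-populated with the image's contents on first use; docker cp simply copies files out of a container:
# Option 1: named volume; Docker copies the image's /tmp contents into it on first use
sudo docker volume create backup_data
sudo docker run -d -v backup_data:/tmp simple_backup
sudo ls "$(sudo docker volume inspect -f '{{ .Mountpoint }}' backup_data)"
# Option 2: copy the files out of a running container into the host directory
sudo docker run -d --name sb simple_backup
sudo docker cp sb:/tmp/. /home/media/simple_dir2/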
I have an NFS share with the following properties:
Mounted on my host on /nfs/external_disk
Owner user is test_user with UID 1234
Group is test_group with GID 2222
Permissions is 750
I have a small Dockerfile with the following content
ARG tag=lts
FROM jenkins/jenkins:${tag}
USER root
# Create a new user and a new group that match what is on the host.
ARG username=test_user
ARG groupname=test_group
ARG uid=1234
ARG gid=2222
RUN groupadd -g ${gid} ${groupname} && \
    mkdir -p /users && \
    useradd -l -m -u ${uid} -g ${groupname} -s /bin/bash -d /users/${username} ${username}
USER ${username}
After building the image (named custom_jenkins), when I run the following command, the container starts properly and I see the original Jenkins home directory contents copied to the share.
docker run -td --rm -v /nfs/external_disk:/var/jenkins_home custom_jenkins
However if I want to mount a sub-directory of the NFS share, say ${NFS_SHARE}/jenkins_home, then I get an error:
docker run -td --rm -v /nfs/external_disk/jenkins_home:/var/jenkins_home custom_jenkins
docker: Error response from daemon: error while creating mount source path '/nfs/external_disk/jenkins_home': mkdir /nfs/external_disk/jenkins_home: permission denied.
Now even if I attempt to create the sub-directory myself before starting the container, I still get the same error. Even when I set the permissions of the sub-directory to be 777.
Note that I am running as test_user which has the same UID/GID as in the container and it actually owns the NFS share.
I have a feeling that when docker attempts to create a sub-directory, it attempts to create it as some different user (e.g. the "docker" user) which causes it to fail while creating the folder since it has no access inside the share.
Can anyone help? thanks in advance.
I tried to reproduce. Works just fine for me. Perhaps I am missing some constraint. Hope this helps anyway. Note at step 6 the owner and the group for the file that I created from the container. This might answer one of your questions.
Step 1: I created a NFS share somewhere in my LAN
Step 2: I mounted the share on the machine that's running the docker engine
sudo mount 192.168.0.xxx:/i-data/b4024d5b/nfs/NFS /mnt/nsa320/
neo@neo-desktop:nsa320$ mount | grep NFS
192.168.0.xxx:/i-data/b4024d5b/nfs/NFS on /mnt/nsa320 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.xxx,mountvers=3,mountport=3775,mountproto=udp,local_lock=none,addr=192.168.0.xxx)
Step 3: I created some sample files and a sub-directory:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/
total 12
drwxrwxrwx 3 root root 4096 Jul 21 22:54 .
drwxr-xr-x 3 root root 4096 Jul 21 22:41 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:45 dummyFile
-rw-r--r-- 1 root root 0 Jul 21 22:53 fileCreatedFromContainer << THIS WAS CREATED FROM A CONTAINER THAT WAS NOT LAUNCHED WITH THE --user OPTION
drwxr-xr-x 2 neo neo 4096 Jul 21 22:54 subfolder
Step 4: Launched a dummy container and mounted the sub-directory (1000 is the UID of the user neo in my OS):
docker run -d -v /mnt/nsa320/subfolder:/var/externalMount --user 1000 alpine tail -f /dev/null
Step 5: Connected to the container to check the mount (I can read and write in the subfolder located on the NFS):
neo@neo-desktop:nsa320$ docker exec -ti ded1dc79773e sh
/ $ ls /var/externalMount/
fileInSubfolder
/ $ touch /var/externalMount/fileInSubfolderCreatedFromContainer
Step 6: Back on the host, checking whom the file that I created from the container belongs to:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/subfolder/
total 8
drwxr-xr-x 2 neo neo 4096 Jul 21 23:23 .
drwxrwxrwx 3 root root 4096 Jul 21 22:54 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:54 fileInSubfolder
-rw-r--r-- 1 neo root 0 Jul 21 23:23 fileInSubfolderCreatedFromContainer
Maybe off-topic: whoami executed in the container returns just the UID:
$ whoami
whoami: unknown uid 1000
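As a side note on that last point, a small sketch assuming the alpine image used above: whoami fails because UID 1000 has no entry in the container's /etc/passwd. Mounting the host's passwd/group files read-only (as in the chown answer earlier) makes the name resolvable:
docker run --rm --user 1000 \
  -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro \
  alpine whoami
# now prints the host user name that owns UID 1000, e.g. neo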
I have built a docker image using this Dockerfile:
--Dockerfile
FROM scratch
ADD archlinux.tar /
ENV LANG=en_US.UTF-8
CMD ["/usr/bin/bash"]
--building the docker image:
docker build -t archlinux/base .
then checking the images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
Then I go into the overlay2 folder and check what happens:
root# cd /var/lib/docker/overlay2
# ls -al
0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
Now I try to run:
$ docker run -it archlinux/base
Now check the /var/lib/docker/overlay2 folder:
# ls -al
total 24
drwx------ 6 root root 4096 Mar 3 15:58 .
drwx--x--x 15 simha users 4096 Mar 3 07:25 ..
drwx------ 3 root root 4096 Mar 3 16:01 0d5db16fa33657d952e4d7921d9239b5a17ef579e03ecdd5046b63fc47d15038
drwx------ 4 root root 4096 Mar 3 16:01 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3
drwx------ 4 root root 4096 Mar 3 15:58 500ef7ee5672b73c778e2080dda0ad7a9101d6b65e5bdb0b52f4e5d2f22fa2b3-init
drwx------ 2 root root 4096 Mar 3 15:58 l
Now I see more folders.
Why was there only one folder before the run, and why do several folders show up in overlay2 afterwards?
If I check the images with the docker command, it shows the same as before:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
archlinux/base latest 7f4e7832243a 43 minutes ago 399MB
How can I understand images and their layers in overlay2?
First note that the contents of the /var/lib/docker/ directory are part of Docker's internal soup and should not be messed with.
Within it, the overlay2 directory stores the layers that constitute your docker images and containers. What is important to remember is that overlay2 is a filesystem that uses union mounts: it merges several directories (layers) into a single view. So when using the filesystem you might see one folder, but under the hood there are more. This is how docker builds up layers.
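A hedged way to see this for yourself (the directory IDs will differ on your machine; <container> is a placeholder for your container's ID or name): ask Docker which overlay2 directories back the image and the container. A running container adds its own writable layer plus an -init layer, which is why more folders appear after docker run:
# overlay2 directories backing the image
docker image inspect archlinux/base --format '{{ json .GraphDriver.Data }}'
# overlay2 directories backing a container (includes its writable upper layer)
docker container inspect <container> --format '{{ json .GraphDriver.Data }}'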
Docker build fails on Windows 10.
Docker installed successfully, but building a docker image with the command below fails.
docker build -t drtuts:latest .
I am facing the issue below.
Kindly let me know if anyone has resolved the same issue.
The problem is that the current user is not the owner of the directory.
I got the same problem in Ubuntu, this line solves the issue:
Ubuntu
sudo chown -R $USER <path-to-folder>
Source: Change folder permissions and ownership
Windows
This link shows how to do the same in Windows:
Take Ownership of a File / Folder through Command Prompt in Windows 10
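If you prefer the command line on Windows, a rough equivalent sketch (run in an elevated Command Prompt; <path-to-folder> is a placeholder for your directory):
takeown /F <path-to-folder> /R /D Y
icacls <path-to-folder> /grant %USERNAME%:F /T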
Just create a new directory and enter it:
$ mkdir dockerfiles
$ cd dockerfiles
Create your file in that directory:
$ touch Dockerfile
Edit it and add the commands with vi:
$ vi Dockerfile
Finally run it:
$ docker build -t tag .
Explanation of the problem
When you run the docker build command, the Docker client gathers all files that need to be sent to the Docker daemon, so it can build the image. This 'package' of files is called the context.
What files and directories are added to the context?
The context contains all files and subdirectories in the directory that you pass to the docker build command. For example, when you call docker build -t img:tag dir_to_build, the context will be everything inside dir_to_build.
If the Docker client does not have sufficient permissions to read some of the files in the context, you get the error checking context: 'can't stat '<FILENAME>'' error.
There are two solutions to this problem:
Move your Dockerfile, together with the files that are needed to build your image, to a separate directory separate_dir that contains no other files/subdirectories. When you now call docker build -t img:tag separate_dir, the context will only contain the files that are actually required for building the image. (If the error still persists, you need to change the permissions on your files so that the Docker client can access them.)
Exclude files from the context using a .dockerignore file. Most of the time, this is probably what you want to be doing.
From the official Docker documentation:
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it.
To answer the question
I would create a .dockerignore file in the same directory as the Dockerfile: ~/.dockerignore with the following contents:
# By default, ignore everything
*
# Add exception for the directories you actually want to include in the context
!project-source-code
!project-config-dir
# source files
!*.py
!*.sh
Further reading:
Official Docker documentation
I found this blog very helpful
Docker grants read and write rights only to the owner of the file, and sometimes the error is thrown when the user trying to build is different from the owner.
You could create a docker group and add the users there.
In Debian this would be as follows:
sudo groupadd docker
sudo usermod -aG docker $USER
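Note that the new group membership only takes effect after you log out and back in (or start a new group session). A quick hedged check:
newgrp docker          # start a shell with the new group active (or log out/in instead)
docker run hello-world # should now work without sudo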
I was also getting the same error message on Windows 10 Home version.
I resolved it by the following steps:
Create a directory called 'dockerfiles' under C:\Users\(username)
Keep the Dockerfile (without any extension) in the newly created directory from step (1).
Now run the command (from the C:\Users\(username) directory):
docker build -t <image-name> ./dockerfiles
It worked like a breeze!
I was having the same issue but working with Linux
$ docker build -t foramontano/openldap-schema:0.1.0 --rm .
$ error checking context: 'can't stat '/home/isidoronm/foramontano/openldap_docker/.docker/config/cn=config''.
I was able to solve the problem by including the directory referred to in the log (/home/isidoronm/foramontano/openldap_docker/.docker) in the .dockerignore file located in the directory where I have the Dockerfile (/home/isidoronm/foramontano/openldap_docker).
isidoronm@vps811154:~/foramontano/openldap_docker$ ls -al
total 48
drwxrwxr-x 5 isidoronm isidoronm 4096 Jun 16 18:04 .
drwxrwxr-x 9 isidoronm isidoronm 4096 Jun 15 17:01 ..
drwxrwxr-x 4 isidoronm isidoronm 4096 Jun 16 17:08 .docker
-rw-rw-r-- 1 isidoronm isidoronm 43 Jun 13 17:25 .dockerignore
-rw-rw-r-- 1 isidoronm isidoronm 214 Jun 9 22:04 .env
drwxrwxr-x 8 isidoronm isidoronm 4096 Jun 13 17:37 .git
-rw-rw-r-- 1 isidoronm isidoronm 5 Jun 13 17:25 .gitignore
-rw-rw-r-- 1 isidoronm isidoronm 408 Jun 16 18:03 Dockerfile
-rw-rw-r-- 1 isidoronm isidoronm 1106 Jun 16 17:10 Makefile
-rw-rw-r-- 1 isidoronm isidoronm 18 Jun 13 17:36 README.md
-rw-rw-r-- 1 isidoronm isidoronm 1877 Jun 12 12:11 docker-compose.yaml
drwxrwxr-x 3 isidoronm isidoronm 4096 Jun 13 10:51 service
Maybe something similar applies on Windows 10.
I got the same problem in Ubuntu; I just added sudo before docker build.
Here are my steps on Windows 10 that worked. Open a Command Prompt window as Administrator:
cd c:\users\ashok\Documents
mkdir dockerfiles
cd dockerfiles
touch dockerfile
notepad dockerfile # Write/paste here the contents of dockerfile
docker build -t jupyternotebook -f dockerfile .
This problem is caused by a permission issue.
I recommend checking the permissions of the respective user on the files that the Dockerfile needs to access.
There is nothing wrong with the path itself.
I faced a similar situation on Windows 10 and was able to resolve it using the following approach:
Navigate to the directory where your Dockerfile is stored from your CLI (I used PowerShell).
Run the docker build . command
It seems like you are using the bash shell on Windows 10; when I tried using it, docker wasn't even recognized as a command (you can check with docker --version).
On Windows 10
Opened command line
c:\users\User>
mkdir dockerfiles
cd dockerfiles
notepad Dockerfile.txt
// after Notepad opens, type your Docker instructions and then save your Dockerfile
Back at the command line, build the image ("python-hello-world" is an example tag):
docker image build -t python-hello-world -f ./Dockerfile.txt .
I am writing this as it may be helpful to people who experiment with AppArmor.
I also got this problem on my Ubuntu machine. It happened because I ran the "aa-genprof docker" command to scan the "docker" command and create an AppArmor profile. The scan created a file usr.bin.docker in the "/etc/apparmor.d" directory, which confined the docker command. After removing that file and rebooting the machine, docker runs perfectly again.
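For reference, a sketch of that cleanup on the command line (the profile name is taken from the answer above; adjust it if aa-genprof named yours differently):
sudo aa-status | grep docker                              # see whether a profile confines docker
sudo apparmor_parser -R /etc/apparmor.d/usr.bin.docker    # unload the profile from the kernel
sudo rm /etc/apparmor.d/usr.bin.docker                    # remove the file, then reboot if needed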
If you arrive here with this issue on a Mac: for me, I just had to cd out of the directory containing the Dockerfile, then cd back in and rerun the command, and it worked.
I had the same issue:
error checking context: 'no permission to read from '\?\C:\Users\userprofile \AppData\Local\AMD\DxCache\36068d231d1a87cd8b4bf677054f884d500b364064779e16..bin''.
It kept throwing this error; there was no problem with permissions. I tried running the command with different switches, but nothing worked.
I created a dockerfiles folder, put the Dockerfile there, and ran the command from that folder path:
C:\Users\ShaikhNaushad\dockerfiles> docker build -t naushad/debian .
And it worked
I faced the same issue while trying to build a docker image from a Dockerfile placed in /Volumes/work. It worked fine when I created a folder dockerfiles within /Volumes/work and placed my Dockerfile in that.
Error? The folder or directory at that particular location, which Docker (or the current user) is trying to modify, has a "different owner" (remember, only the owners or creators of particular files/documents have the explicit right to modify them).
Or it could be that the file has already been created, so the error is thrown when a new create statement for a similar file is issued.
Solution? Revert ownership of that particular folder and its contents to the current user. This can be done in two ways:
Delete that particular file with sudo rm and recreate it (e.g. with mkdir for a directory); this way you will be the owner/creator of that file. Do this only if the data in that file can be rebuilt or is not crucial.
Change the ownership/permissions of that file; Linux users can follow this tutorial: https://www.tomshardware.com/how-to/change-file-directory-permissions-linux
In my case, I use a separate hard disk only for data and an SSD for the system (Linux Mint). The problem was solved when I moved the files to the system disk (the SSD).
PS: I used the command sudo chown -R $USER <path-to-folder> and the problem wasn't solved.