My Dockerfile:
FROM mcr.microsoft.com/playwright:focal as influencer-scraper
USER root
# Install tools such as telnet
RUN apt-get update && apt-get install telnet -y
# RUN apk add chromium
RUN groupadd --gid 888 node \
&& useradd --uid 888 --gid node --shell /bin/bash --create-home node
USER node
WORKDIR /home/node
# Copy package.json and Yarn install (separate for cache)
COPY ./package.json ./
COPY ./yarn.lock ./
RUN yarn
# Copy everything and build
COPY . .
# Copy other config files
COPY ./.env ./.env
# Entry point
ENTRYPOINT ["yarn", "start"]
CMD ["--mongodb", "host.docker.internal:27017"]
However, after I log in to the container, I found that all the files are owned by root, which causes trouble at runtime:
➜ influencer-scraper-js git:(master) ✗ docker run -it --entrypoint /bin/bash influencer-scraper:v0.1-6-gfe17ad4962-dirty
node@bce54c1024db:~$ ls -l
total 52
-rw-r--r--. 1 root root 542 Apr 16 04:15 Docker.md
-rw-r--r--. 1 root root 589 Apr 16 05:03 Dockerfile
-rw-r--r--. 1 root root 570 Apr 16 03:58 Makefile
-rw-r--r--. 1 root root 358 Apr 13 01:27 README.md
drwxr-xr-x. 1 root root 20 Apr 16 03:58 config
drwxr-xr-x. 1 root root 16 Apr 16 03:58 data
drwxr-xr-x. 1 root root 14 Apr 12 06:00 docker
-rw-r--r--. 1 root root 558 Apr 16 03:58 docker-compose.yml
drwxr-xr-x. 1 root root 140 Apr 13 01:27 generated
drwxr-xr-x. 1 root root 1676 Apr 16 04:47 node_modules
-rw-r--r--. 1 root root 583 Apr 16 03:58 package.json
drwxr-xr-x. 1 root root 34 Apr 13 01:27 proxy
drwxr-xr-x. 1 root root 40 Apr 13 01:27 src
-rw-r--r--. 1 root root 26230 Apr 16 03:58 yarn.lock
How can I resolve this? I would like the workdir to still be owned by the node user.
Quoting the Docker documentation: https://docs.docker.com/engine/reference/builder/#copy
COPY has two forms:
COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"]
If you do not specify any user in --chown, the default used is root
All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the copied content.
You can also try doing a chown after copying:
chown root:node filename
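Applied to the Dockerfile in the question, a minimal sketch of the fix is to pass --chown on each COPY (using the node user and group created earlier in the file; --chown requires Docker 17.09 or later):
# Copy files as the node user instead of the default root
COPY --chown=node:node ./package.json ./
COPY --chown=node:node ./yarn.lock ./
RUN yarn
COPY --chown=node:node . .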
The file listing you show looks almost correct to me. You want most of the files to be owned by root and not be world-writeable: in the event that there's some security issue or other bug in your code, you don't want that to accidentally overwrite your source files, static assets, or other content.
This means you need the actual writeable data to be stored in a different directory, and your listing includes a data directory which presumably serves this role. You can chown it in your Dockerfile.
For clarity, it helps to stay as the root user until the very end of the file, and then you can declare the alternate user to actually run the container.
# USER root (if required)
RUN chown node data
...
USER node
CMD ["yarn", "start"]
When you launch the container, you can mount a volume on that specific directory. This setup should work as-is with a named volume:
docker run \
-v app_data:/home/node/data \
...
If you want/need to use a host directory to store the data, you also need to specify the host user ID that owns the directory (typically the current user). Again, the application code will be owned by root and world-readable, so this won't change; it's only the data directory whose contents and ownership matter.
docker run \
-u $(id -u) \
-v "$(pwd)/app_data:/home/node/data" \
...
(Do not use volumes to replace the application code or libraries in the container. In this particular case, doing that would obscure this configuration problem in the Dockerfile, and your container setup would fail when you tried to deploy to production without the local-build volumes.)
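If you manage the container with docker-compose, a sketch of the equivalent named-volume setup could look like this (the service name scraper is an assumption, not from the question):
version: "3.8"
services:
  scraper:
    build: .
    volumes:
      # only the writeable data directory is a volume; code stays in the image
      - app_data:/home/node/data
volumes:
  app_data: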
I have a volume which uses bind to share a local directory. Sometimes this directory doesn't exist, and then everything falls apart. How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
Volume example:
- type: bind
read_only: true
source: /srv/share/
target: /srv/share/
How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
As far as I am aware, you can't do conditional logic to mount a volume, but I am working around it in a project of mine like this:
version: "2.1"
services:
elixir:
image: elixir:alpine
volumes:
- ${VOLUME_SOURCE:-/dev/null}:${VOLUME_TARGET:-/.devnull}:ro
Here I am using /dev/null as the fallback, but in my real project I just use an empty file for the mapping.
The ${VOLUME_SOURCE:-/dev/null} syntax is how bash supplies default values for unset variables, and docker-compose supports it.
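If you want those variables set automatically instead of by hand, a small wrapper script on the host could do it (this script is a sketch of mine, not part of the original answer; the /srv/share path is taken from the question):
#!/bin/sh
# Point the volume at /srv/share only when it exists;
# otherwise the /dev/null fallback in the compose file kicks in.
if [ -d /srv/share ]; then
  export VOLUME_SOURCE=/srv/share
  export VOLUME_TARGET=/srv/share
fi
docker-compose up -d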
Testing it without setting the env vars
$ sudo docker-compose run --rm elixir sh
/ # ls -al /.devnull
crw-rw-rw- 1 root root 1, 3 May 21 12:27 /.devnull
Testing it with the env vars set
Creating the .env file:
$ printf "VOLUME_SOURCE=./testing \nVOLUME_TARGET=/testing\n" > .env && cat .env
VOLUME_SOURCE=./testing
VOLUME_TARGET=/testing
Creating the volume for test purposes:
$ mkdir testing && touch testing/test.txt && ls -al testing
total 8
drwxr-xr-x 2 exadra37 exadra37 4096 May 22 13:12 .
drwxr-xr-x 3 exadra37 exadra37 4096 May 22 13:12 ..
-rw-r--r-- 1 exadra37 exadra37 0 May 22 13:12 test.txt
Running the container:
$ sudo docker-compose run --rm elixir sh
/ # ls -al /testing/
total 8
drwxr-xr-x 2 1000 1000 4096 May 22 12:01 .
drwxr-xr-x 1 root root 4096 May 22 12:07 ..
-rw-r--r-- 1 1000 1000 0 May 22 12:01 test.txt
/ #
I don't think there is an easy way to do that with the docker-compose syntax yet. Here is how I went about it, although with this approach the container will not start at all if the volume is missing:
Check the launch command with docker inspect on the unpatched container.
Change your command to something like this (here using egorive/seafile-mc:8.0.7-rpi on a Raspberry Pi, where the data is on an external disk that might not always be plugged in):
volumes:
- '/data/seafile-data:/shared:Z'
command: sh -c "( [ -f /shared/.docker-volume-check ] || ( echo volume not mounted, not starting; sleep 60; exit 1 )) && exec /sbin/my_init -- /scripts/start.py"
restart: always
Run touch .docker-volume-check in the root of your volume.
That way, you have a restartable container that fails and waits if the volume is not mounted. It also handles volumes in a generic way: for instance, a newly created container whose volume has not yet been initialized by a first-time setup will still boot, because you are checking for a file you created yourself.
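For the volume source shown above, the host-side marker file would be created with:
touch /data/seafile-data/.docker-volume-check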
I want to give a Docker container to my students so that they can conduct experiments. I thought I would use the following Dockerfile:
FROM jupyter/datascience-notebook:latest
ADD ./config.json /home/jovyan/.jupyter/jupyter_notebook_config.json
ADD ./books /home/jovyan/work
So, the standard container will include a few notebooks I have created and stored in the books folder. I then build and run this container locally with:
#!/usr/bin/env bash
docker build -t aaa .
docker run --rm -p "8888:8888" -v $(pwd)/books:/home/joyvan/work aaa
I build the image aaa and share the books folder with it again (although books was already copied into the image at build time). I now open the container on port 8888. I can edit the files in the /home/joyvan/work folder, but the changes are not transported back to the host. Something goes terribly wrong. Is it because I add the files during the docker build and then share them again with -v ...?
I have played with various options. I have added the local user to the users group. I ran chown on all files in books. All my files show up as root:root in the container. I am then jovyan in the container and do not have write access to those files. How can I make sure the files are owned by jovyan?
EDIT:
Some other elements:
tom@thomas-ThinkPad-T450s:~/babynames$ docker exec -it cranky_poincare /bin/bash
jovyan@5607ac2bcaae:~$ id
jovyan uid=1000(jovyan) gid=100(users) groups=100(users)
jovyan@5607ac2bcaae:~$ cd work/
jovyan@5607ac2bcaae:~/work$ ls
test.txt text2.txt
jovyan@5607ac2bcaae:~/work$ ls -ltr
total 4
-rw-rw-r-- 1 root root 5 Dec 12 19:05 test.txt
-rw-rw-r-- 1 root root 0 Dec 12 19:22 text2.txt
on the host:
tom@thomas-ThinkPad-T450s:~/babynames/books$ ls -ltr
total 4
-rw-rw-r-- 1 tom users 5 Dez 12 20:05 test.txt
-rw-rw-r-- 1 tom users 0 Dez 12 20:22 text2.txt
tom@thomas-ThinkPad-T450s:~/babynames/books$ id tom
uid=1001(tom) gid=1001(tom) groups=1001(tom),27(sudo),100(users),129(docker)
You can try:
FROM jupyter/datascience-notebook:latest
ADD ./config.json /home/jovyan/.jupyter/jupyter_notebook_config.json
ADD ./books /home/jovyan/work
RUN chown -R jovyan /home/jovyan/work
This works if the user already exists; with RUN you can execute arbitrary commands in your Dockerfile.
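Alternatively, both ADD and COPY accept the --chown flag (Docker 17.09+), so a sketch of the same Dockerfile without the extra RUN layer would be (jovyan:users taken from the id output above):
FROM jupyter/datascience-notebook:latest
# Copy with ownership set to the notebook user rather than root
ADD --chown=jovyan:users ./config.json /home/jovyan/.jupyter/jupyter_notebook_config.json
ADD --chown=jovyan:users ./books /home/jovyan/work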
docker build failed on Windows 10.
After Docker installed successfully, I tried building a Docker image using the command below:
docker build -t drtuts:latest .
I am facing the issue below.
Kindly let me know if anyone has resolved the same issue.
The problem is that the current user is not the owner of the directory.
I got the same problem in Ubuntu; this line solves the issue:
Ubuntu
sudo chown -R $USER <path-to-folder>
Source: Change folder permissions and ownership
Windows
This link shows how to do the same in Windows:
Take Ownership of a File / Folder through Command Prompt in Windows 10
Just create a new directory and enter it:
$ mkdir dockerfiles
$ cd dockerfiles
Create your file in that directory:
$ touch Dockerfile
Edit it and add the commands with vi:
$ vi Dockerfile
Finally, run it:
$ docker build -t tag .
Explanation of the problem
When you run the docker build command, the Docker client gathers all files that need to be sent to the Docker daemon, so it can build the image. This 'package' of files is called the context.
What files and directories are added to the context?
The context contains all files and subdirectories in the directory that you pass to the docker build command. For example, when you call docker build -t img:tag dir_to_build, the context will be everything inside dir_to_build.
If the Docker client does not have sufficient permissions to read some of the files in the context, you get the error checking context: 'can't stat '<FILENAME>'' error.
There are two solutions to this problem:
Move your Dockerfile, together with the files that are needed to build your image, to a separate directory separate_dir that contains no other files or subdirectories. When you now call docker build -t img:tag separate_dir, the context will only contain the files that are actually required for building the image. (If the error still persists, that means you need to change the permissions on your files so that the Docker client can access them.)
Exclude files from the context using a .dockerignore file. Most of the time, this is probably what you want to be doing.
From the official Docker documentation:
Before the docker CLI sends the context to the docker daemon, it looks for a file named .dockerignore in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it.
To answer the question
I would create a .dockerignore file in the same directory as the Dockerfile (~/.dockerignore) with the following contents:
# By default, ignore everything
*
# Add exception for the directories you actually want to include in the context
!project-source-code
!project-config-dir
# source files
!*.py
!*.sh
Further reading:
Official Docker documentation
I found this blog very helpful
Docker grants read and write rights only to the owner of the file, and sometimes the error is thrown if the user trying to build is different from the owner.
You can create a docker group and add your user to it.
On Debian it would be as follows:
sudo groupadd docker
sudo usermod -aG docker $USER
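Note that group membership is only picked up on a new login session; to apply it in the current shell you can run:
newgrp docker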
I was also getting the same error message on Windows 10 Home version.
I resolved it with the following steps:
Create a directory called dockerfiles under C:\Users\(username).
Keep the Dockerfile (without any extension) under the newly created directory from step 1.
Now run the following command from the C:\Users\(username) directory:
docker build -t <tag> ./dockerfiles
It worked like a charm!
I was having the same issue, but on Linux:
$ docker build -t foramontano/openldap-schema:0.1.0 --rm .
error checking context: 'can't stat '/home/isidoronm/foramontano/openldap_docker/.docker/config/cn=config''.
I was able to solve the problem by including the directory referenced in the log (/home/isidoronm/foramontano/openldap_docker/.docker) in the .dockerignore file located in the same directory as my Dockerfile (/home/isidoronm/foramontano/openldap_docker).
isidoronm@vps811154:~/foramontano/openldap_docker$ ls -al
total 48
drwxrwxr-x 5 isidoronm isidoronm 4096 Jun 16 18:04 .
drwxrwxr-x 9 isidoronm isidoronm 4096 Jun 15 17:01 ..
drwxrwxr-x 4 isidoronm isidoronm 4096 Jun 16 17:08 .docker
-rw-rw-r-- 1 isidoronm isidoronm 43 Jun 13 17:25 .dockerignore
-rw-rw-r-- 1 isidoronm isidoronm 214 Jun 9 22:04 .env
drwxrwxr-x 8 isidoronm isidoronm 4096 Jun 13 17:37 .git
-rw-rw-r-- 1 isidoronm isidoronm 5 Jun 13 17:25 .gitignore
-rw-rw-r-- 1 isidoronm isidoronm 408 Jun 16 18:03 Dockerfile
-rw-rw-r-- 1 isidoronm isidoronm 1106 Jun 16 17:10 Makefile
-rw-rw-r-- 1 isidoronm isidoronm 18 Jun 13 17:36 README.md
-rw-rw-r-- 1 isidoronm isidoronm 1877 Jun 12 12:11 docker-compose.yaml
drwxrwxr-x 3 isidoronm isidoronm 4096 Jun 13 10:51 service
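Based on the listing above, the .dockerignore entry that excludes the problematic directory would simply be:
.docker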
Maybe something similar applies on Windows 10.
I got the same problem in Ubuntu; I just added sudo before docker build.
Here are my steps on Windows 10 that worked. Open a Command Prompt window as Administrator:
cd c:\users\ashok\Documents
mkdir dockerfiles
cd dockerfiles
type nul > dockerfile
notepad dockerfile # Write/paste here the contents of dockerfile
docker build -t jupyternotebook -f dockerfile .
This problem is caused by a permission issue.
I recommend checking the permissions of the user who needs to access the Dockerfile.
There is nothing wrong with any path.
I faced a similar situation on Windows 10 and was able to resolve it using the following approach:
Navigate to the directory where your Dockerfile is stored from your CLI (I used PowerShell).
Run the docker build . command
It seems like you are using the bash shell in Windows 10; when I tried it, docker wasn't even recognized as a command (you can check with docker --version).
On Windows 10
Opened command line
c:\users\User>
mkdir dockerfiles
cd dockerfiles
notepad Dockerfile.txt
// after Notepad opens, type your Docker commands and then save your Dockerfile
Back at the command line ("python-hello-world" below is an example image name):
docker image build -t python-hello-world -f ./Dockerfile.txt .
I am writing this as it will be helpful to people who play with AppArmor.
I also got this problem on my Ubuntu machine. It happened because I ran the "aa-genprof docker" command to scan the "docker" command and create an AppArmor profile. The scan created a file usr.bin.docker in the "/etc/apparmor.d" directory, which added some restrictions for the docker command. After removing that file and rebooting the machine, docker ran perfectly again.
If you arrive here with this issue on a Mac: for me, I just had to cd out of the directory containing the Dockerfile, then cd back in and rerun the command, and it worked.
I had the same issue:
error checking context: 'no permission to read from '\?\C:\Users\userprofile \AppData\Local\AMD\DxCache\36068d231d1a87cd8b4bf677054f884d500b364064779e16..bin''.
It kept throwing this error, and there was no problem with the permissions; I tried running the command with different switches, but nothing worked.
I created a dockerfiles folder, put the Dockerfile there, and ran the command from that folder path:
C:\Users\ShaikhNaushad\dockerfiles> docker build -t naushad/debian .
And it worked
I faced the same issue while trying to build a Docker image from a Dockerfile placed in /Volumes/work. It worked fine when I created a dockerfiles folder within /Volumes/work and placed my Dockerfile in it.
Error? The folder or directory at that particular location that Docker (the current user) is trying to modify has a different owner; remember that only the owners/creators of particular files have the explicit right to modify them.
It could also be that the file has already been created, so the error is thrown when a new create statement for a similar file is issued.
Solution? Revert ownership of that particular folder and its contents to the current user. This can be done in two ways:
Delete that particular file with sudo rm and recreate it (for a directory, mkdir); this way you will be the owner/creator of it. Do this only if the data in that file can be rebuilt or is not crucial.
Change the file permissions for that file; Linux users can follow this tutorial: https://www.tomshardware.com/how-to/change-file-directory-permissions-linux
In my case, I use a separate hard disk only for data and an SSD for the system (Linux Mint). The problem was solved when I moved the files to the system disk (the SSD).
PS: I had used the command sudo chown -R $USER <path-to-folder>, and it did not solve the problem.
I am playing with a Dockerfile and I have this:
ARG PUID=1000
ARG PGID=1000
RUN groupadd -g $PGID docker-user && \
useradd -u $PUID -g docker-user -m docker-user && \
mkdir /home/docker-user/.composer
COPY container-files/home/docker-user/.composer/composer.json /home/docker-user/.composer
RUN chown -R docker-user:docker-user /home/docker-user/.composer
USER docker-user
RUN composer global install
But when I try to build the image it ends with the following error:
Step 6 : COPY container-files/home/docker-user/.composer/composer.json /home/docker-user/.composer
lstat container-files/home/docker-user/.composer/composer.json: no such file or directory
The file does exist on the host as per this output:
$ ls -la workspace/container-files/home/docker-user/.composer/
total 12
drwxrwxr-x 2 rperez rperez 4096 Oct 5 11:34 .
drwxrwxr-x 3 rperez rperez 4096 Oct 5 11:14 ..
-rw-rw-r-- 1 rperez rperez 208 Oct 5 11:20 composer.json
I have also tried this syntax:
COPY container-files /
But that didn't work either. So I have to ask: what's wrong? Why does this keep failing again and again? What am I missing here?
The documentation addresses this with:
By default the docker build command will look for a Dockerfile at
the root of the build context. The -f, --file, option lets you
specify the path to an alternative file to use instead. This is useful
in cases where the same set of files are used for multiple builds. The
path must be to a file within the build context. If a relative path is specified then it is interpreted as relative to the root of the context.
In this case I think it should be:
COPY workspace/container-files/home/docker-user/.composer/composer.json /home/docker-user/.composer
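Alternatively, if you want to keep the original COPY paths, you can run the build with workspace itself as the context (the tag name myimage here is just a placeholder):
cd workspace
docker build -t myimage .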