This question already has answers here:
Docker Copy and change owner
(2 answers)
Closed 4 months ago.
1. FROM node:16.17-alpine
2.
3. RUN addgroup app && adduser -S -G app app
4. USER app
5.
6. WORKDIR /app
7. COPY . .
I then run: docker build -t mytest .
[+] Building 3.3s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 313B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/node:16.17-alpine 3.0s
=> [1/4] FROM docker.io/library/node:16.17-alpine@sha256:4d68856f48be7c73cd83ba8af3b6bae98f4679e14d1ff49e164625ae8831533a 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 40.15kB 0.0s
=> CACHED [2/4] RUN addgroup app && adduser -S -G app app 0.0s
=> CACHED [3/4] WORKDIR /app 0.0s
=> [4/4] COPY . . 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:aaeb83b6fde7be16f0c9a80d7f9a5af868a08ad603269051014716a32ca8f54c 0.0s
=> => naming to docker.io/library/mytest 0.0s
Now I run a container from it with: docker run -it mytest sh
and confirm I'm the app user:
/app $ whoami
app
Running ls -l to view the contents and their permissions:
/app $ ls -l
total 52
-rwxr-xr-x 1 root root 274 Oct 11 06:22 Dockerfile
-rwxr-xr-x 1 root root 309 Oct 10 12:47 index.js
-rwxr-xr-x 1 root root 39685 Oct 11 06:37 package-lock.json
-rwxr-xr-x 1 root root 211 Oct 10 13:45 package.json
The owner is root, but in my Dockerfile on line 3 I created a group and a user before running the COPY command, and on line 4 I set the user to app. So why is the owner of the copied content root and not app?
When I create a new file hello.ts there, the owner is now app
/app $ touch hello.ts
/app $ ls -l
total 52
-rwxr-xr-x 1 root root 274 Oct 11 06:22 Dockerfile
-rw-r--r-- 1 app app 0 Oct 11 06:42 hello.ts
-rwxr-xr-x 1 root root 309 Oct 10 12:47 index.js
-rwxr-xr-x 1 root root 39685 Oct 11 06:37 package-lock.json
-rwxr-xr-x 1 root root 211 Oct 10 13:45 package.json
/app $
How do I set the user during the build?
You need to set the owner on the COPY instruction itself: COPY copies files as root regardless of the USER instruction, unless you specify otherwise.
COPY --chown=app:app . .
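Applied to the Dockerfile from the question, a minimal sketch:
FROM node:16.17-alpine
RUN addgroup app && adduser -S -G app app
USER app
WORKDIR /app
COPY --chown=app:app . .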
Hi there, I'm new to Docker and Dockerfiles in general.
However, I need to create one in order to load an application on a server using WDL. That said, there are a few important aspects of this Dockerfile:
requires to create a Conda environment
in there I have to install Snakemake (through Mamba)
finally, I will need to git clone a repository and follow the steps to generate an executable for the application, later invoked by Snakemake
Luckily, it seems most of the pieces are already on Docker Hub; correct me if I'm wrong based on the script (see below):
1 # getting ubuntu base image & anaconda3 loaded
2 FROM ubuntu:latest
3 FROM continuumio/anaconda3:2021.05
4 FROM condaforge/mambaforge:latest
5 FROM snakemake/snakemake:stable
6
7 FROM node:alpine
8 RUN apk add --no-cache git
9 RUN apk add --no-cache openssh
10
11 MAINTAINER Name <email>
12
13 WORKDIR /home/xxx/Desktop/Pangenie
14
15 ## ACTUAL PanGenIe INSTALLATION
16 RUN git clone https://github.com/eblerjana/pangenie.git /home/xxx/Desktop/Pangenie
17 # create the environment
18 RUN conda env create -f environment.yml
19 # build the executable
20 RUN conda activate pangenie
21 RUN mkdir build; cd build; cmake .. ; make
First, I think that also loading Mamba and Snakemake would allow me to simply launch the application, as the tools are already set up by the Dockerfile. Then I would ideally like to build the executable from the repository, but I get an error at line 18 when I try to create the Conda environment. This is what I get:
[+] Building 1.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 708B 0.1s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.1s
=> [internal] load metadata for docker.io/library/node:alpine 1.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [stage-4 1/6] FROM docker.io/library/node:alpine@sha256:1a04e2ec39cc0c3a9657c1d6f8291ea2f5ccadf6ef4521dec946e522833e87ea 0.0s
=> CACHED [stage-4 2/6] RUN apk add --no-cache git 0.0s
=> CACHED [stage-4 3/6] RUN apk add --no-cache openssh 0.0s
=> CACHED [stage-4 4/6] WORKDIR /home/mat/Desktop/Pangenie 0.0s
=> CACHED [stage-4 5/6] RUN git clone https://github.com/eblerjana/pangenie.git /home/mat/Desktop/Pangenie 0.0s
=> ERROR [stage-4 6/6] RUN conda env create -f environment.yml 0.1s
------
> [stage-4 6/6] RUN conda env create -f environment.yml:
#10 0.125 /bin/sh: conda: not found
------
executor failed running [/bin/sh -c conda env create -f environment.yml]: exit code: 127
Now, I'm not really experienced as I said, and I spent some time looking for a solution and tried different things, but nothing worked out... if anyone has an idea or even suggestions on how to fix this Dockerfile, please let me know.
Thanks in advance!
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#understand-build-context
After reading the above doc, I tested this on my local machine. My environment:
❯ sw_vers
ProductName: macOS
ProductVersion: 11.6.1
BuildVersion: 20G224
❯ docker version
Client:
Cloud integration: v1.0.22
Version: 20.10.13
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 10 14:08:44 2022
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Desktop 4.6.1 (76265)
Engine:
Version: 20.10.13
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 906f57f
Built: Thu Mar 10 14:06:05 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.10
GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
I understood the following statement in the documentation to mean "even if the Dockerfile does not use files in the build context directly, they still affect the resulting image size":
Inadvertently including files that are not necessary for building an image results in a larger build context and larger image size. This can increase the time to build the image, time to pull and push it, and the container runtime size
So the first test builds two images from the same Dockerfile, one with the large file in its build context and one without:
❯ cat Dockefile
FROM alpine:3.15.4
RUN ["echo", "a"]
❯ mkdir -p contexts/small contexts/large
❯ touch contexts/small/file
❯ dd if=/dev/urandom of=contexts/large/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.404337 s, 259 MB/s
❯ ll **/*/file --block-size=M
-rw-r--r-- 1 hansuk 100M 4 12 14:15 contexts/large/file
-rw-r--r-- 1 hansuk 0M 4 12 14:15 contexts/small/file
❯ tree -a .
.
├── Dockefile
└── contexts
├── large
│ └── file
└── small
└── file
3 directories, 3 files
❯ docker build --no-cache -f Dockefile -t small contexts/small
[+] Building 3.0s (6/6) FINISHED
=> [internal] load build definition from Dockefile 0.0s
=> => transferring dockerfile: 78B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15.4 2.3s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aa 0.0s
=> [2/2] RUN ["echo", "a"] 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:695a2b36e3b6ec745fddb3d499acda249f7235917127cf6d68c93cc70665a1dc 0.0s
=> => naming to docker.io/library/small 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
❯ docker build --no-cache -f Dockefile -t large contexts/large
[+] Building 1.4s (6/6) FINISHED
=> [internal] load build definition from Dockefile 0.0s
=> => transferring dockerfile: 78B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.15.4 1.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aa 0.0s
=> [2/2] RUN ["echo", "a"] 0.3s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:7c48ae10200ceceff83c6619e9e74d76b7799032c53fba57fa8adcbe54bb5cce 0.0s
=> => naming to docker.io/library/large 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
❯ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
large latest 7c48ae10200c 7 seconds ago 5.57MB
small latest 695a2b36e3b6 15 seconds ago 5.57MB
The resulting image sizes are exactly the same, and there is no difference in the size of 'transferring context'. (This message differs from the "Sending build context to Docker daemon ..." wording in the documentation, but I guess that's due to the BuildKit upgrade.)
Then I changed the Dockerfile to use the file for the second test:
❯ cat Dockefile
FROM alpine:3.15.4
COPY file /file
❯ docker build --no-cache -f Dockefile -t small contexts/small
# ... Eliding unnecessary logs
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 25B 0.0s
=> [2/2] COPY file /file 0.0s
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Eliding unnecessary logs
=> [internal] load build context 4.1s
=> => transferring context: 104.88MB 4.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file
❯ docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
large latest 426740a3fa3c 7 minutes ago 110MB
small latest 33a32d1e5aab 7 minutes ago 5.57MB
The resulting sizes now differ, apparently by the size of the file. But I noticed that on repeated builds of the "large" image, no transferring occurred for the same file (context), even with the --no-cache option:
# Run a build again after the previous build
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Build context is very small
=> [internal] load build context 0.0s
=> => transferring context: 28B 0.0s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file 0.6s
# ...
# Recreate the file
❯ dd if=/dev/urandom of=contexts/large/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.374838 s, 280 MB/s
❯ docker build --no-cache -f Dockefile -t large contexts/large
# ... Then it transfers the new build context (file)
=> [internal] load build context 2.7s
=> => transferring context: 104.88MB 2.7s
=> CACHED [1/2] FROM docker.io/library/alpine:3.15.4@sha256:4edbd2beb5f78b1014028f4fbb99f3237d9561100b6881aabbf5acce2c4 0.0s
=> [2/2] COPY file /file 0.7s
To sum up my questions:
Does the size of the build context affect the resulting image only when part of the context, a file or a directory, is used in the Dockerfile? e.g. COPY build/context /here
Are there optimization options or improvements since the documentation was written?
Is the "build context" cached (by BuildKit or the runtime)? I mean the context itself, not the image layers.
Update
Sending the whole build context can be reproduced with the legacy Docker builder:
# Create a 1GB file in `small' image build context
❯ dd if=/dev/urandom of=contexts/small/dummy bs=1M count=1024
# Measurement
❯ time sh -c "DOCKER_BUILDKIT=0 docker build -t small -f Dockerfile contexts/small/"
Sending build context to Docker daemon 1.074GB
Step 1/2 : FROM alpine:3.15.4
---> 0ac33e5f5afa
Step 2/2 : COPY file /file
---> 5b93097102e3
Successfully built 5b93097102e3
Successfully tagged small:latest
real 0m26.639s
user 0m2.809s
sys 0m4.557s
❯ time sh -c "DOCKER_BUILDKIT=0 docker build -t large -f Dockerfile contexts/large/"
Sending build context to Docker daemon 10.49MB
Step 1/2 : FROM alpine:3.15.4
---> 0ac33e5f5afa
Step 2/2 : COPY file /file
---> Using cache
---> 3e24b0f37389
Successfully built 3e24b0f37389
Successfully tagged large:latest
real 0m0.655s
user 0m0.227s
sys 0m0.161s
The question for now is how BuildKit optimizes it.
Does the size of the build context affect the resulting image only when part of the context, a file or a directory, is used in the Dockerfile? e.g. COPY build/context /here
Are there optimization options or improvements since the documentation was written?
The image only grows when you copy files from the build context into the image. However, many images COPY entire directories without the author realizing everything those directories contain. Excluding specific files or subdirectories with a .dockerignore shrinks the image in those cases.
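For example, a small .dockerignore placed next to the Dockerfile (entries here are illustrative; adjust to your project) keeps those files out of the context and therefore out of any broad COPY . .:
# .dockerignore — illustrative entries
.git
node_modules
*.log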
A "build context" is cached (by buildkit or runtime)? I mean not the image layers.
BuildKit changes this process dramatically, and the documentation was written around the classic build tooling. With BuildKit, previous versions of the context are cached, and only the files explicitly copied into the image are fetched, using something similar to rsync to update its cache. Note that this doesn't apply when building in ephemeral environments, like a CI server that creates a new BuildKit cache per build.
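One way to inspect that cache yourself, assuming a reasonably recent buildx, is docker buildx du; its verbose output lists BuildKit's cache records, which should include the local-context entries alongside the layer cache:
docker buildx du --verbose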
This question already has answers here:
Why is docker build not showing any output from commands?
(6 answers)
Closed 11 months ago.
The answers here don't seem to work. The answer here also doesn't work. I suspect something has changed about Docker's build engine since then.
My Dockerfile:
FROM node:16.14.2-alpine
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY dist .
EXPOSE $SEEDSERV_PORT
RUN pwd
RUN echo "output"
RUN ls -alh
RUN contents="$(ls -1 /usr/src/app)" && echo $contents
# CMD ["node","server.js"]
ENTRYPOINT ["tail", "-f", "/dev/null"]
Which gives this output from build:
✗ docker build --progress auto --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .
[+] Building 2.1s (14/14) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/node:16.14.2-alpine 1.9s
=> [internal] load build context 0.0s
=> => transferring context: 122B 0.0s
=> [1/9] FROM docker.io/library/node:16.14.2-alpine@sha256:da7ef512955c906b6fa84a02295a56d0172b2eb57e09286ec7abc02cfbb4c726 0.0s
=> CACHED [2/9] WORKDIR /usr/src/app 0.0s
=> CACHED [3/9] COPY package.json yarn.lock ./ 0.0s
=> CACHED [4/9] RUN yarn 0.0s
=> CACHED [5/9] COPY dist . 0.0s
=> CACHED [6/9] RUN pwd 0.0s
=> CACHED [7/9] RUN echo "output" 0.0s
=> CACHED [8/9] RUN ls -alh 0.0s
=> CACHED [9/9] RUN contents="$(ls -1 /usr/src/app)" && echo $contents 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d1dd7ac452ecacc803eed2bb1deff654c3296a5576b6f418dbd07c5f2e644f1a 0.0s
Adding --progress plain gives slightly different output but not what I'm looking for, e.g.:
#11 [7/9] RUN echo "output"
#11 sha256:634e07d201926b0f70289515fcf4a7303cac3658aeddebfa9552fc3054ed4ace
#11 CACHED
How can I get a directory listing during build in 20.10.3? I can exec into the running container but that's a lot more work.
If your build is cached, there's no output from the RUN steps to show. You need to include --no-cache to run the commands again so there is output to display, and also include --progress plain to print it to the console.
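For example, reusing the build command from the question with both flags added:
docker build --no-cache --progress plain --build-arg SEEDSERV_PORT=9999 -f build/api/Dockerfile .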
I'm trying to get the version number of the latest release of a GitHub repository (stored in the LATEST_VERSION argument). Using this, I want to download a particular file from the repository (using the ADD command), however I am getting an error (build output below).
FROM adoptopenjdk/openjdk11:jdk-11.0.6_10-alpine
RUN mkdir -p /home/app
ARG ORG_NAME=archu0212
ARG REPO_NAME=Test
ARG LATEST_VERSION=$(curl -s https://api.github.com/repos/${ORG_NAME}/${REPO_NAME}/releases/latest | grep "tag_name" | cut -d'v' -f2 | cut -d'"' -f1)
ADD https://github.com/${ORG_NAME}/${REPO_NAME}/releases/download/v${LATEST_VERSION}/MainTestFile.jar /home/app
ENTRYPOINT ["java", "-jar", "/home/app"]
docker build -t test-image .
[+] Building 2.0s (6/7)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 649B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/adoptopenjdk/openjdk11:jdk-11.0.6_10-alpine 1.4s
=> [1/3] FROM docker.io/adoptopenjdk/openjdk11:jdk-11.0.6_10-alpine@sha256:a4e96cebf2f00b354b6f935560d4e64ad24435af77322cdf538975962d7e17d3 0.0s
=> ERROR https://github.com/archu0212/Test/releases/download/v$(curl/MainTestFile.jar 0.3s
=> CACHED [2/3] RUN mkdir -p /home/app 0.0s
------
> https://github.com/archu0212/Test/releases/download/v$(curl/MainTestFile.jar:
Any help would be appreciated. Stuck on this for a week.
I'm trying to build an image from CentOS 6.9 using this Dockerfile:
FROM centos:6.9
RUN ls
But it keeps failing with exit code 139 with the following output:
$ docker build -t centos-6.9 .
[+] Building 1.1s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 72B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/centos:6.9 0.6s
=> [internal] load build context 0.1s
=> => transferring context: 72B 0.0s
=> CACHED [1/3] FROM docker.io/library/centos:6.9@sha256:6fff0a9edc920968351eb357c5b84016000fec6956e6d745f695e5a34f18ecd2 0.0s
=> [2/3] COPY . . 0.0s
=> ERROR [3/3] RUN ls 0.3s
------
> [3/3] RUN ls:
------
executor failed running [/bin/sh -c ls]: exit code: 139
I'm running:
Windows 10 Enterprise Version 2004
Docker Desktop 3.0.0
This appears to be an issue with WSL 2 and older base images, not Docker or the image itself.
Create %userprofile%\.wslconfig file.
Add the following:
[wsl2]
kernelCommandLine = vsyscall=emulate
Restart WSL: wsl --shutdown
Restart Docker Desktop.
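As a convenience, the first three steps can be scripted from PowerShell (a sketch assuming the default profile path):
# Write the WSL 2 config, then shut down WSL so the setting is picked up on restart
Set-Content -Path "$env:USERPROFILE\.wslconfig" -Value "[wsl2]`nkernelCommandLine = vsyscall=emulate"
wsl --shutdown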
References:
https://github.com/microsoft/WSL/issues/4694#issuecomment-556095344
https://github.com/docker/for-win/issues/7284#issuecomment-646910923
https://github.com/microsoft/WSL/issues/4694#issuecomment-558335829