I am trying to build a Docker image containing some pretty simple packages.
Here's the requirements file:
autoimpute==0.12.2
numpy==1.19.2
fuzzywuzzy==0.18.0
pymongo==3.11.4
boto3==1.18.65
pandas==1.1.3
pytest==0.0.0
scikit_learn==1.0
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
COPY data data/
COPY models models/
COPY models_autoML models_autoML/
RUN apt install swig
RUN python3.8 -m pip install -r requirements.txt -t .
COPY dsci1.py ./
# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]
I get the following error, which is weird since I pulled a project from another collaborator and it is supposed to work. Why is it failing, and what can I do to fix it?
#10 18.88 error: subprocess-exited-with-error
#10 18.88
#10 18.88 × Running setup.py install for pyrfr did not run successfully.
#10 18.88 │ exit code: 1
#10 18.88 ╰─> [7 lines of output]
#10 18.88 running install
#10 18.88 running build_ext
#10 18.88 building 'pyrfr._regression' extension
#10 18.88 swigging pyrfr/regression.i to pyrfr/regression_wrap.cpp
#10 18.88 swig -python -c++ -modern -py3 -features nondynamic -I./include -o pyrfr/regression_wrap.cpp pyrfr/regression.i
#10 18.88 unable to execute 'swig': No such file or directory
#10 18.88 error: command 'swig' failed with exit status 1
#10 18.88 [end of output]
#10 18.88
#10 18.88 note: This error originates from a subprocess, and is likely not a problem with pip.
#10 18.88 error: legacy-install-failure
#10 18.88
#10 18.88 × Encountered error while trying to install package.
#10 18.88 ╰─> pyrfr
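A likely culprit, judging from the log: pip has to compile the pyrfr package from source, which requires the swig binary, but the `RUN apt install swig` step cannot provide it, because the public.ecr.aws/lambda/python:3.8 base image is Amazon Linux, which uses yum rather than apt. A hedged sketch of a fix (the package names beyond swig are assumptions, not taken from the question):

```dockerfile
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
# swig plus a C/C++ toolchain so pip can build pyrfr from source
RUN yum install -y swig gcc gcc-c++ && \
    python3.8 -m pip install -r requirements.txt -t .
```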
I'm trying to run my Docker image and start my container, but I'm getting an error.
This is my Dockerfile:
FROM artifactory...../xxx_docker-local/xxx_java_maven:11
COPY settings/conf /application/conf
WORKDIR /application/conf
RUN ls -lrth
COPY settings/front /application/front
COPY settings/scripts /application/scripts
WORKDIR /application/scripts/
RUN ls -lrth
COPY application/target/xxx-application.jar /application/service/xxx-application.jar
WORKDIR /application/scripts/
RUN chmod +x *.sh
EXPOSE 9420
RUN pwd
ENTRYPOINT ["xxx_application_start.sh"]
After building the image, I tried to run it, but I got this error:
WARNING: The requested image's platform (linux/amd64) does not match the detected
host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create
failed: runc create failed: unable to start container process: exec:
"xxx_application_start.sh": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
I updated my ENTRYPOINT to use the full path to the file, and I got an error as well:
ENTRYPOINT ["/application/scripts/xxx_application_start.sh"]
The error here is:
WARNING: The requested image's platform (linux/amd64) does not match the detected
host platform (linux/arm64/v8) and no specific platform was requested
exec /application/scripts/xxx_application_start.sh: no such file or
directory
This is the output of docker build:
> #14 [10/13] WORKDIR /application/scripts/
#14 sha256:a57bf9c86907fb870c9af30bf81067acda86b224f2d5145027463aa929d2e115
#14 DONE 0.0s
#15 [11/13] RUN chmod +x *.sh
#15 sha256:954310bda76055d5682d752342d69aac231e09f8b9f6be7ad59a8611c6d0538b
#15 DONE 0.2s
#16 [12/13] RUN ls -lrth
#16 sha256:c1c4351c5aa33a313b980994c6fadf0d5ad15b3997bf5d59d9f547c946ba8992
#16 0.174 total 24K
#16 0.174 -rwxrwxr-x 1 root root 1.1K Jan 18 14:03 xxx_application_stop.sh
#16 0.174 -rwxrwxr-x 1 root root 2.2K Jan 18 15:52 xxx_application_start.sh
#16 DONE 0.2s
#17 [13/13] RUN pwd
#17 sha256:c9594028015f207f90cea0e4c4f8bb94b83ee608b238ab2b970d6a4003053da8
#17 0.330 /application/scripts
#17 DONE 0.3s
#18 exporting to image
#18 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#18 exporting layers
#18 exporting layers 0.3s done
#18 writing image sha256:8b8d1a822756e236a1fa7c729cbc25e386484b13fee6b8efed876ee561b6eb3e done
#18 naming to docker.io/library/image-name:latest done
#18 DONE 0.3s
And these are the two commands I'm running:
docker build --no-cache -t image-name:latest -f Dockerfile .
docker run --read-only -p 8080:9420 image-name
Any help, please?
When running the Docker image, you need to tell the host which platform to use, since the image's platform doesn't match it. Try passing the --platform linux/amd64 flag to the docker run command. Alternatively, you can pass this flag while building the image too. I hope that solves your issue.
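For instance, with the two commands from the question (hedged; this assumes your Docker install can emulate amd64 on the arm64 host):

```shell
# Build and run explicitly as linux/amd64 so the platform-mismatch
# warning goes away and the image runs under emulation.
docker build --platform linux/amd64 --no-cache -t image-name:latest -f Dockerfile .
docker run --platform linux/amd64 --read-only -p 8080:9420 image-name
```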
I use docker buildx build because I currently need to see how much time every stage consumes.
For instance, this looks good:
#14 [runner 4/11] RUN addgroup --system --gid 1001 nodejs
#14 DONE 0.4s
#15 [deps 5/8] COPY package.json .npmrc ./
#15 DONE 0.3s
#16 [deps 6/8] COPY package-lock.json .npmrc ./
#16 DONE 0.0s
#17 [deps 7/8] RUN echo "//npm.pkg.github.com/:_authToken=***" >> .npmrc
#17 DONE 0.1s
#18 [runner 5/11] RUN adduser --system --uid 1001 nextjs
#18 DONE 0.1s
But sometimes some of the stages lack the elapsed-time mark:
#8 [deps 2/8] RUN apk add --no-cache libc6-compat
#0 1.680 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
#0 1.856 fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
#0 2.171 (1/2) Upgrading musl (1.2.3-r1 -> 1.2.3-r2)
#0 2.188 (2/2) Installing libc6-compat (1.2.3-r2)
#0 2.194 OK: 8 MiB in 17 packages
#8 ...
#10 [runner 3/11] RUN npm install -g http-server
#10 ...
See, these stages end with an ellipsis ("..."). More than that, the actual log output is simply cut off; only the first few lines are displayed.
What am I doing wrong? How do I make docker buildx display the elapsed time and not omit the logs?
Okay, that's normal behaviour. Docker neither hides nor omits anything.
Instead, docker buildx runs stages in parallel, which means different processes produce output at the same time, and these outputs compete for space on your screen. This looks funny sometimes, see:
//let's pretend some logs happened earlier
//npm ci at "deps" stage #1
#14 [deps 7/7] RUN npm ci
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 12.58MB / 165.14MB 0.6s
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 22.02MB / 165.14MB 0.9s
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 35.65MB / 165.14MB 1.4s
#14 ... //here it is broken to let other streams to display
//another stage! called "builder"
#13 [builder 2/6] WORKDIR /app
#13 extracting sha256:6fb1b25da51088e0fd1776f0b761b6d6b10d19b644e4c0d2a0e7c6d2237d8c65 1.2s done
#13 DONE 1.8s
//it's done immediately
//npm ci #2 - it's still going
#14 [deps 7/7] RUN npm ci
#14 ...
//interrupted again as other things have happened
//yet one more step from the "builder" stage that is done right away
#15 [builder 3/6] COPY . .
#15 DONE 0.8s
//npm ci #3 - yet one more excerpt from "deps" stage
#14 [deps 7/7] RUN npm ci
#14 extracting sha256:7c212e964304cc15fbcf39bdd14d4f36a521a643dd3e51e781813b312de7af4d 0.3s done
#14 extracting sha256:24586cb5165f566576965c06a65122a81074b5e9585cdc76bb77dbaeda2f405e 0.1s done
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 65.01MB / 165.14MB 2.9s
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 77.59MB / 165.14MB 3.3s
#14 sha256:cb63df9e77a82ef8a8e520cbc17c2d84edf9e2d13e48f7b2bb87cbffcb84d168 165.14MB / 165.14MB 6.6s done
#14 DONE 6.9s
//see, it's still completed! and the time is displayed properly. just not in one move, you have to scroll
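If you want the full, uncollapsed log in one sequential stream, a hedged workaround is plain progress output:

```shell
# --progress=plain disables the interactive collapsing view, so log lines
# are not truncated and each step still ends with its "DONE <seconds>" mark.
docker buildx build --progress=plain -t image-name .
```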
Thank you all :)
I am building a program which uses the prost crate and which should run in a Docker container. When I run cargo build --release from the console everything works fine, same with cargo run, but when I try to do docker build ./ I get this error:
#15 160.6 error: failed to run custom build command for `prost-build v0.10.4`
#15 160.6
#15 160.6 Caused by:
#15 160.6 process didn't exit successfully: `/web_server/target/release/build/prost-build-b45bc4869a088027/build-script-build` (exit status: 101)
#15 160.6 --- stdout
#15 160.6 cargo:rerun-if-changed=/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.4/third-party/protobuf/cmake
#15 160.6 CMAKE_TOOLCHAIN_FILE_x86_64-unknown-linux-gnu = None
#15 160.6 CMAKE_TOOLCHAIN_FILE_x86_64_unknown_linux_gnu = None
#15 160.6 HOST_CMAKE_TOOLCHAIN_FILE = None
#15 160.6 CMAKE_TOOLCHAIN_FILE = None
#15 160.6 CMAKE_GENERATOR_x86_64-unknown-linux-gnu = None
#15 160.6 CMAKE_GENERATOR_x86_64_unknown_linux_gnu = None
#15 160.6 HOST_CMAKE_GENERATOR = None
#15 160.6 CMAKE_GENERATOR = None
#15 160.6 CMAKE_PREFIX_PATH_x86_64-unknown-linux-gnu = None
#15 160.6 CMAKE_PREFIX_PATH_x86_64_unknown_linux_gnu = None
#15 160.6 HOST_CMAKE_PREFIX_PATH = None
#15 160.6 CMAKE_PREFIX_PATH = None
#15 160.6 CMAKE_x86_64-unknown-linux-gnu = None
#15 160.6 CMAKE_x86_64_unknown_linux_gnu = None
#15 160.6 HOST_CMAKE = None
#15 160.6 CMAKE = None
#15 160.6 running: "cmake" "/usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/prost-build-0.10.4/third-party/protobuf/cmake" "-Dprotobuf_BUILD_TESTS=OFF" "-DCMAKE_INSTALL_PREFIX=/web_server/target/release/build/prost-build-5d68a19605f74072/out" "-DCMAKE_C_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_C_COMPILER=/usr/bin/cc" "-DCMAKE_CXX_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_CXX_COMPILER=/usr/bin/c++" "-DCMAKE_ASM_FLAGS= -ffunction-sections -fdata-sections -fPIC -m64" "-DCMAKE_ASM_COMPILER=/usr/bin/cc" "-DCMAKE_BUILD_TYPE=Debug"
#15 160.6
#15 160.6 --- stderr
#15 160.6 thread 'main' panicked at '
#15 160.6 failed to execute command: No such file or directory (os error 2)
#15 160.6 is `cmake` not installed?
#15 160.6
#15 160.6 build script failed, must exit now', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.48/src/lib.rs:975:5
#15 160.6 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
#15 160.6 warning: build failed, waiting for other jobs to finish...
------
executor failed running [/bin/sh -c cargo build --release]: exit code: 101
My Dockerfile:
FROM rust:1.62 as build
RUN USER=root cargo new --bin web_server
WORKDIR /web_server
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
COPY ./src ./src
COPY build.rs build.rs
COPY ./proto ./proto
RUN cargo build --release
FROM debian:buster-slim
COPY --from=build /web_server/target/release/web_server .
CMD ["./web_server"]
I'm on Windows 10 and work from VS Code, and I do have CMake installed and on my PATH.
Does anybody know what I might have done wrong inside the Dockerfile?
From the stderr section of your Docker build logs: you need to install the cmake dependency during the image build.
#15 160.6 --- stderr
#15 160.6 thread 'main' panicked at '
#15 160.6 failed to execute command: No such file or directory (os error 2)
#15 160.6 is `cmake` not installed?
rust:1.62 is Debian-based, so installing that dependency would look like:
RUN apt-get update && apt-get install -y cmake && cargo build --release
Depending on your other build dependencies, you may need to install other packages.
Some motivation why this is needed, based on your comment:
I'm on Windows 10 and work from VS Code, and I do have CMake installed and on my PATH. Does anybody know what I might have done wrong inside the Dockerfile?
You may have cmake installed locally, but it does not look like cmake is installed while the image is being built. The Docker image build environment is sandboxed: it doesn't have access to any locally installed programs, like your local cmake. You can read more about Docker image concepts here: https://docs.docker.com/get-started/overview/#images
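Putting it together, a hedged sketch of the corrected build stage from the question (the --no-install-recommends flag and the apt cache cleanup are additions of mine, not from the original):

```dockerfile
FROM rust:1.62 as build
WORKDIR /web_server
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
COPY ./src ./src
COPY build.rs build.rs
COPY ./proto ./proto
# install cmake inside the image before building, so prost-build's
# build script can compile its bundled protobuf
RUN apt-get update && \
    apt-get install -y --no-install-recommends cmake && \
    rm -rf /var/lib/apt/lists/* && \
    cargo build --release
```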
I am running into a problem with BuildKit and I cannot figure out the reason.
I have a Dockerfile using a SLES OS base image, and it tries to install some packages via zypper. Every time this step is executed (i.e. not cached), it takes ages to complete.
This is a dummy Dockerfile for reproducing the issue:
# syntax=docker/dockerfile:1.3
FROM registry.suse.com/suse/sles12sp4
RUN zypper search iproute2
This is the execution when I enable BuildKit:
docker build --no-cache --progress=plain --pull -t test_zypper .
#1 [internal] load build definition from Dockerfile
#1 sha256:1e8bc50247fba08161184996db9e2b6bca36c339623376a360765244d9d3ed8b
#1 transferring dockerfile: 202B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:bfa4297d1f77b21d1d84347ff3f9c338cef560c9f5c8ef8f6843338b88a83178
#2 transferring context: 2B done
#2 DONE 0.0s
#3 resolve image config for docker.io/docker/dockerfile:1.3
#3 sha256:4fcd28d33487ad029eab28c03869fd56295f3902c713674c129a438f7a780653
#3 DONE 1.1s
#4 docker-image://docker.io/docker/dockerfile:1.3#sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2
#4 sha256:7862c1373501a4a9cd96ccd04641bb1d96c86d034546e74fe74585e3dd12f952
#4 CACHED
#5 [internal] load build definition from Dockerfile
#5 sha256:adf8dd6b4b2604f820e4a4112252c8bfd5984ffa809d1fc7c5330e387575a53d
#5 DONE 0.0s
#6 [internal] load .dockerignore
#6 sha256:59c105584afe8ac8255febcea4650f6e8891b4b14fcdd7b93254039769df3828
#6 DONE 0.0s
#7 [internal] load metadata for registry.suse.com/suse/sles12sp4:latest
#7 sha256:30c143f62f5a593ad20fd34265d2933e13da97368f12f3e0c990b52851933dff
#7 DONE 0.5s
#8 [1/2] FROM registry.suse.com/suse/sles12sp4#sha256:06390bd3b9903f3d4bb1345deb7fc35e18af73de0263d0f4d5c619267bee2adf
#8 sha256:3d15a7aaf66ed6810de2347b0da9787e5a57b9c536d85ccc4b01e9eb5831bcc1
#8 CACHED
#9 [2/2] RUN zypper search iproute2
#9 sha256:17060fcd75740edd49881abc4d1b5a4f7de80f59cde5b2b6f32e97ff02bbc29d
#9 377.9 Refreshing service 'container-suseconnect-zypp'.
#9 556.7 Problem retrieving the repository index file for service 'container-suseconnect-zypp':
#9 556.7 [container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
#9 556.7 Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
#9 556.7 Loading repository data...
#9 556.7 Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
#9 556.7 Reading installed packages...
#9 556.7 No matching items found.
#9 ERROR: executor failed running [/bin/sh -c zypper search iproute2]: exit code: 104
------
> [2/2] RUN zypper search iproute2:
------
executor failed running [/bin/sh -c zypper search iproute2]: exit code: 104
This is the execution when I don't enable BuildKit:
time docker build --no-cache --progress=plain --pull -t test_zypper .
Sending build context to Docker daemon 678.5MB
Step 1/2 : FROM registry.suse.com/suse/sles12sp4
latest: Pulling from suse/sles12sp4
Digest: sha256:06390bd3b9903f3d4bb1345deb7fc35e18af73de0263d0f4d5c619267bee2adf
Status: Image is up to date for registry.suse.com/suse/sles12sp4:latest
---> 3126dff9c7fd
Step 2/2 : RUN zypper search iproute2
---> Running in 3efe8a741628
Refreshing service 'container-suseconnect-zypp'.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
Loading repository data...
Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
Reading installed packages...
No matching items found.
The command '/bin/sh -c zypper search iproute2' returned a non-zero code: 104
real 0m23.972s
user 0m1.987s
sys 0m2.161s
It is not a problem of missing repositories: in my original Dockerfile they are all defined and the build does eventually work, but each zypper command takes 20 minutes or more.
Is something wrong with the way I use BuildKit?
Thanks in advance!
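A hedged guess at what is happening: the container-suseconnect-zypp service looks up SUSE registration credentials, and in BuildKit's more isolated build environment it apparently cannot find them, so every zypper call waits for that lookup to fail before continuing. Since the Dockerfile already uses the docker/dockerfile:1.3 syntax, one approach SUSE documents is to pass the host's SCC credentials in as a build secret (the credential path below is the usual SLES location, an assumption on my part, not taken from the question):

```dockerfile
# syntax=docker/dockerfile:1.3
FROM registry.suse.com/suse/sles12sp4
# mount the SUSE Customer Center credentials only for this RUN step;
# container-suseconnect also checks /run/secrets/SCCcredentials
RUN --mount=type=secret,id=SCCcredentials \
    zypper search iproute2
```

Built on a registered SLES host with something like: docker build --secret id=SCCcredentials,src=/etc/zypp/credentials.d/SCCcredentials .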
I am trying to take advantage of the caching/pulling system of BuildKit in Docker for my CI/CD process, but it does not work as expected.
I created a dummy local example (but the same happens also in my CI system - AWS CodePipeline, and for both DockerHub and AWS ECR).
The Dockerfile:
# base image
FROM python:3.7-slim
# set working directory
WORKDIR /usr/src/app
# add and install requirements
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirements.txt
RUN echo 123
# add app
COPY ./run_test.py /usr/src/app/run_test.py
# run server
CMD ["python", "run_test.py"]
run_test.py is actually not interesting, but here is the code just in case:
import requests
import time

while True:
    time.sleep(1)
    print(requests)
Also you need to create an empty requirements.txt file in the same folder.
Beforehand, I export two environment variables:
export DOCKER_BUILDKIT=1 # to activate buildkit
export DUMMY_IMAGE_URL=bi0max/test_docker
Then, to test, I run the following commands. The first two remove the local cache to resemble the CI environment; then I build and push.
BE CAREFUL, THE CODE BELOW REMOVES THE LOCAL BUILD CACHE:
docker builder prune -a -f && \
(docker image rm $DUMMY_IMAGE_URL:latest || true) && \
docker build \
--cache-from $DUMMY_IMAGE_URL:latest \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--tag $DUMMY_IMAGE_URL:latest "." && \
docker push $DUMMY_IMAGE_URL:latest
As expected, the first run just builds everything from scratch:
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 434B done
#2 DONE 0.0s
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#12 [1/7] FROM docker.io/library/python:3.7-slim
#12 DONE 0.0s
#7 [internal] load build context
#7 DONE 0.0s
#4 importing cache manifest from bi0max/test_docker:latest
#4 ERROR: docker.io/bi0max/test_docker:latest not found
#12 [1/7] FROM docker.io/library/python:3.7-slim
#12 resolve docker.io/library/python:3.7-slim done
#12 DONE 0.0s
#7 [internal] load build context
#7 transferring context: 204B done
#7 DONE 0.1s
#5 [2/7] WORKDIR /usr/src/app
#5 DONE 0.0s
#6 [3/7] RUN pip install --upgrade pip
#6 1.951 Requirement already up-to-date: pip in /usr/local/lib/python3.7/site-packages (20.1.1)
#6 DONE 2.3s
#8 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#8 DONE 0.0s
#9 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#9 0.750 Collecting requests==2.22.0
#9 0.848 Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)
#9 0.932 Collecting idna<2.9,>=2.5
#9 0.948 Downloading idna-2.8-py2.py3-none-any.whl (58 kB)
#9 0.995 Collecting chardet<3.1.0,>=3.0.2
#9 1.011 Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
#9 1.135 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
#9 1.153 Downloading urllib3-1.25.9-py2.py3-none-any.whl (126 kB)
#9 1.264 Collecting certifi>=2017.4.17
#9 1.282 Downloading certifi-2020.4.5.1-py2.py3-none-any.whl (157 kB)
#9 1.378 Installing collected packages: idna, chardet, urllib3, certifi, requests
#9 1.916 Successfully installed certifi-2020.4.5.1 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.9
#9 DONE 2.2s
#10 [6/7] RUN echo 123
#10 0.265 123
#10 DONE 0.3s
#11 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#11 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f98327afae246096725f7e54742fe9b25079f1b779699b099e66c8def1e19052 done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
Then, I slightly adjust the run_test.py file, and the result is again as expected: all the layers up to the last step ([7/7] COPY) are downloaded from the repository and reused.
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 434B done
#1 DONE 0.1s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#8 [internal] load build context
#8 DONE 0.0s
#4 [1/7] FROM docker.io/library/python:3.7-slim
#4 DONE 0.0s
#5 importing cache manifest from bi0max/test_docker:latest
#5 DONE 1.2s
#8 [internal] load build context
#8 transferring context: 193B done
#8 DONE 0.0s
#6 [2/7] WORKDIR /usr/src/app
#6 CACHED
#7 [3/7] RUN pip install --upgrade pip
#7 CACHED
#9 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#9 CACHED
#10 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#10 CACHED
#11 [6/7] RUN echo 123
#11 pulling sha256:79fc69c08b391d082b4d2617faed489d220444fa0cf06953cdff55c667866bed
#11 pulling sha256:071624272167ab4e35a30eb1640cb3f15ced19c6cd10fa1c9d49763372e81c23
#11 pulling sha256:04ed4ecd76e1a110f468eb1a3173bbfa578c6b4c85a6dc82bf4a489ed8b8c54d
#11 pulling sha256:79fc69c08b391d082b4d2617faed489d220444fa0cf06953cdff55c667866bed 0.2s done
#11 pulling sha256:d6406c1ce2dc5e841233ebce164ee469388102cb98f1473adaeca15455d6d797
#11 pulling sha256:071624272167ab4e35a30eb1640cb3f15ced19c6cd10fa1c9d49763372e81c23 0.5s done
#11 pulling sha256:04ed4ecd76e1a110f468eb1a3173bbfa578c6b4c85a6dc82bf4a489ed8b8c54d 0.5s done
#11 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
#11 pulling sha256:d6406c1ce2dc5e841233ebce164ee469388102cb98f1473adaeca15455d6d797 0.3s done
#11 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.2s done
#11 CACHED
#12 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#12 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f37692114f10b9a3646203569a0849af20774651f4aa0f5dc8d6f133fb7ff062 done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
Now, I change run_test.py again, and I would expect Docker to do the same thing as last time. But I get the following result, where it builds everything from scratch:
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 434B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#5 [1/7] FROM docker.io/library/python:3.7-slim
#5 DONE 0.0s
#8 [internal] load build context
#8 DONE 0.0s
#4 importing cache manifest from bi0max/test_docker:latest
#4 DONE 1.7s
#8 [internal] load build context
#8 transferring context: 182B done
#8 DONE 0.0s
#5 [1/7] FROM docker.io/library/python:3.7-slim
#5 resolve docker.io/library/python:3.7-slim done
#5 DONE 0.1s
#6 [2/7] WORKDIR /usr/src/app
#6 DONE 0.0s
#7 [3/7] RUN pip install --upgrade pip
#7 1.774 Requirement already up-to-date: pip in /usr/local/lib/python3.7/site-packages (20.1.1)
#7 DONE 2.1s
#9 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#9 DONE 0.0s
#10 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#10 0.805 Collecting requests==2.22.0
#10 0.905 Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)
#10 1.079 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
#10 1.109 Downloading urllib3-1.25.9-py2.py3-none-any.whl (126 kB)
#10 1.242 Collecting certifi>=2017.4.17
#10 1.259 Downloading certifi-2020.4.5.1-py2.py3-none-any.whl (157 kB)
#10 1.336 Collecting idna<2.9,>=2.5
#10 1.353 Downloading idna-2.8-py2.py3-none-any.whl (58 kB)
#10 1.410 Collecting chardet<3.1.0,>=3.0.2
#10 1.428 Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
#10 1.545 Installing collected packages: urllib3, certifi, idna, chardet, requests
#10 2.102 Successfully installed certifi-2020.4.5.1 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.9
#10 DONE 2.4s
#11 [6/7] RUN echo 123
#11 0.259 123
#11 DONE 0.3s
#12 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#12 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f4ffb0e84e334b4b35fe2504de11012e5dc1ca5978eace055932e9bbbe83c93e done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
But the strangest thing for me is that when I change run_test.py for the third time, it uses the cached layers again. And it continues in the same way: the fourth time it doesn't use the cache, the fifth time it does, etc.
Am I missing something here?
If I pull the image each time before building, then it always uses the cache, but that also works the same way without BuildKit.
This issue was fixed in newer Docker versions; a simple upgrade resolves it.
Otherwise, the workaround described on GitHub can help you avoid relying on the system's Docker version: https://github.com/moby/buildkit/issues/1981#issuecomment-785534131
I believe the inline cache image becomes invalid (or incomplete) if it was built while reusing the cache. It's either a limitation or a bug.
There is a workaround: you can tag a distinct cache image that you only push to the registry when BuildKit has rebuilt the image. AFAIK there is no way to know whether BuildKit used the cache or not, but we can see that the log is filled with CACHED when it did, so we can rely on that. For example:
# enable buildkit:
$ export DOCKER_BUILDKIT=1
# build image trying to use cache image + build cache image:
$ docker build . \
    --tag image:latest \
    --tag image:build-cache \
    --cache-from image:build-cache \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    2>&1 | tee docker.log
# push the new image to the registry:
$ docker push image:latest
# trick: only push the cache image to the registry if it was rebuilt:
$ grep -q CACHED docker.log || docker push image:build-cache