I'm trying to run my Docker image and start my container, but I'm getting an error.
This is my Dockerfile:
FROM artifactory...../xxx_docker-local/xxx_java_maven:11
COPY settings/conf /application/conf
WORKDIR /application/conf
RUN ls -lrth
COPY settings/front /application/front
COPY settings/scripts /application/scripts
WORKDIR /application/scripts/
RUN ls -lrth
COPY application/target/xxx-application.jar /application/service/xxx-application.jar
WORKDIR /application/scripts/
RUN chmod +x *.sh
EXPOSE 9420
RUN pwd
ENTRYPOINT ["xxx_application_start.sh"]
After building the image, I tried to run it, but I got this error:
WARNING: The requested image's platform (linux/amd64) does not match the detected
host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create
failed: runc create failed: unable to start container process: exec:
"xxx_application_start.sh": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
I updated my ENTRYPOINT to use the full path to the file, but I got an error as well:
ENTRYPOINT ["/application/scripts/xxx_application_start.sh"]
The error here is:
WARNING: The requested image's platform (linux/amd64) does not match the detected
host platform (linux/arm64/v8) and no specific platform was requested
exec /application/scripts/xxx_application_start.sh: no such file or
directory
This is the output of docker build:
#14 [10/13] WORKDIR /application/scripts/
#14 sha256:a57bf9c86907fb870c9af30bf81067acda86b224f2d5145027463aa929d2e115
#14 DONE 0.0s
#15 [11/13] RUN chmod +x *.sh
#15 sha256:954310bda76055d5682d752342d69aac231e09f8b9f6be7ad59a8611c6d0538b
#15 DONE 0.2s
#16 [12/13] RUN ls -lrth
#16 sha256:c1c4351c5aa33a313b980994c6fadf0d5ad15b3997bf5d59d9f547c946ba8992
#16 0.174 total 24K
#16 0.174 -rwxrwxr-x 1 root root 1.1K Jan 18 14:03 xxx_application_stop.sh
#16 0.174 -rwxrwxr-x 1 root root 2.2K Jan 18 15:52 xxx_application_start.sh
#16 DONE 0.2s
#17 [13/13] RUN pwd
#17 sha256:c9594028015f207f90cea0e4c4f8bb94b83ee608b238ab2b970d6a4003053da8
#17 0.330 /application/scripts
#17 DONE 0.3s
#18 exporting to image
#18 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#18 exporting layers
#18 exporting layers 0.3s done
#18 writing image sha256:8b8d1a822756e236a1fa7c729cbc25e386484b13fee6b8efed876ee561b6eb3e done
#18 naming to docker.io/library/image-name:latest done
#18 DONE 0.3s
And these are the two commands I'm running:
docker build --no-cache -t image-name:latest -f Dockerfile .
docker run --read-only -p 8080:9420 image-name
Any help, please?
When running the Docker image, you need to tell Docker which platform to use, since the image's platform does not match the host's. Try passing the --platform linux/amd64 flag to the docker run command. Alternatively, you can pass this flag while building the Docker image too. I hope that solves your issue.
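For example, with the commands from the question, that looks like this:
# build the image explicitly for linux/amd64
docker build --no-cache --platform linux/amd64 -t image-name:latest -f Dockerfile .
# run it, forcing the same platform (emulated on the arm64 host)
docker run --platform linux/amd64 --read-only -p 8080:9420 image-name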
I am trying to build an image with Docker containing some pretty simple packages.
Here's the requirements file:
autoimpute==0.12.2
numpy==1.19.2
fuzzywuzzy==0.18.0
pymongo==3.11.4
boto3==1.18.65
pandas==1.1.3
pytest==0.0.0
scikit_learn==1.0
Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
COPY data data/
COPY models models/
COPY models_autoML models_autoML/
RUN apt install swig
RUN python3.8 -m pip install -r requirements.txt -t .
COPY dsci1.py ./
# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]
I get the following error, which is weird since I pulled a project from another collaborator and it is supposed to work. Why is it failing, and what can I do to fix it?
#10 18.88 error: subprocess-exited-with-error
#10 18.88
#10 18.88 × Running setup.py install for pyrfr did not run successfully.
#10 18.88 │ exit code: 1
#10 18.88 ╰─> [7 lines of output]
#10 18.88 running install
#10 18.88 running build_ext
#10 18.88 building 'pyrfr._regression' extension
#10 18.88 swigging pyrfr/regression.i to pyrfr/regression_wrap.cpp
#10 18.88 swig -python -c++ -modern -py3 -features nondynamic -I./include -o pyrfr/regression_wrap.cpp pyrfr/regression.i
#10 18.88 unable to execute 'swig': No such file or directory
#10 18.88 error: command 'swig' failed with exit status 1
#10 18.88 [end of output]
#10 18.88
#10 18.88 note: This error originates from a subprocess, and is likely not a problem with pip.
#10 18.88 error: legacy-install-failure
#10 18.88
#10 18.88 × Encountered error while trying to install package.
#10 18.88 ╰─> pyrfr
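For reference, public.ecr.aws/lambda/python:3.8 is Amazon Linux based and uses yum rather than apt, so the error suggests swig was never actually installed. A minimal, untested sketch of what that step might look like instead (exact package names not verified for this base image):
# Amazon Linux images use yum, not apt; pyrfr needs swig and a C++ toolchain to build
RUN yum install -y swig gcc gcc-c++ make && yum clean all
RUN python3.8 -m pip install -r requirements.txt -t .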
I am trying to build a Docker image for my sample-go app.
I am running the build from the sample-app folder itself, using the GoLand editor's terminal, but the build is failing with the errors below.
My Dockerfile looks like this:
FROM alpine:latest
RUN mkdir -p /src/build
WORKDIR /src/build
RUN apk add --no-cache tzdata ca-certificates
COPY ./configs /configs
COPY main /main
EXPOSE 8000
CMD ["/main"]
Command for building:
docker build --no-cache --progress=plain - < Dockerfile
Errors and logs:
#1 [internal] load build definition from Dockerfile
#1 sha256:8bb9ee83603259cf748d90ce42602f12527fa720d7417da22799b2ad4e503497
#1 transferring dockerfile: 222B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:f93d938488588cd0e0a94d9d343fe69dcfd28d0cb1da95ad7aab00aac50235c3
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/alpine:latest
#3 sha256:13549c58a76bcb5dac9d52bc368a8fb6b5cf7659f94e3fa6294917b85546978d
#3 DONE 0.0s
#10 [1/6] FROM docker.io/library/alpine:latest
#10 sha256:d20daa00e252bfb345a1b4f53b6bb332aafe702d8de5e583a76fcd09ba7ea1c1
#10 CACHED
#7 [internal] load build context
#7 sha256:0f7a8a6082a837c139acc2855e1b745bba9f28cc96709d45cd0b7be42442c0e8
#7 transferring context: 2B done
#7 DONE 0.0s
#4 [2/6] RUN mkdir -p /src/build
#4 sha256:b9fa3007a44471d47414dd29b3ff07ead6af28ede820a2b4bae0ce84cf2c5a83
#4 CACHED
#5 [3/6] WORKDIR /src/build
#5 sha256:b2ec58a365fdd74c4f9030b0caff2e2225eea33617da306678ad037fce675388
#5 CACHED
#6 [4/6] RUN apk add --no-cache tzdata ca-certificates
#6 sha256:0966097abf956d5781bc2330d49cf715cd52c3807e8fedfff07dec50907ff03b
#6 CACHED
#9 [6/6] COPY main /main
#9 sha256:f4b81960427c014a020361bea0903728f289e1d796892fe0adc6409434f3ca76
#9 ERROR: "/main" not found: not found
#8 [5/6] COPY ./configs /configs
#8 sha256:630f272dd60dd307f40dbbdaef277ee0dfc24b71fa11e10a3b8efd64d3c05086
#8 ERROR: "/configs" not found: not found
#4 [2/6] RUN mkdir -p /src/build
#4 sha256:b9fa3007a44471d47414dd29b3ff07ead6af28ede820a2b4bae0ce84cf2c5a83
#4 DONE 0.2s
------
> [5/6] COPY ./configs /configs:
------
------
> [6/6] COPY main /main:
------
failed to compute cache key: "/main" not found: not found
PS: I am not able to find where the problem is. Help, please!
The two folders /main and /configs do not exist.
The COPY command can't copy into these folders.
Solution 1
Create the folders at build time:
RUN mkdir -p /main
RUN mkdir -p /configs
Then use COPY.
Solution 2
Try building without COPY and CMD.
Then run the new image.
Exec into the running container with bash or sh.
Create the folders.
Exit the container.
Create a new image from the running container with docker commit.
Stop the container and delete it.
Build again with your new image and include COPY and CMD (see the sketch below).
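A rough sketch of these steps (container and image names are illustrative):
# 1. build the image with the COPY/CMD lines removed
docker build -t sample-app:tmp .
# 2. run it once and create the folders inside
docker run --name tmp-build sample-app:tmp sh -c "mkdir -p /configs /main"
# 3. snapshot the stopped container as a new image
docker commit tmp-build sample-app:base
# 4. remove the temporary container
docker rm tmp-build
# 5. switch the Dockerfile to FROM sample-app:base and add COPY/CMD back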
This is a basic mistake.
COPY ./configs /configs: copy the folder configs from the host to the Docker image.
COPY main /main: copy the executable file main from the host to the Docker image.
The problem is:
The base Docker image does not have the folders /configs and /main; you must create them manually (that is how Docker interpreted your command).
But I have some advice:
Create two Docker images for two purposes: build and production.
Copy the source code into the Docker builder image, which is used for building your app.
Copy the necessary output files from the Docker builder image into the Docker production image, as sketched below.
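A minimal multi-stage sketch of this idea for a Go app (Go version, module layout and output path are illustrative):
# builder image: compile the app from source
FROM golang:1.19-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/main .

# production image: only the binary, the configs and runtime dependencies
FROM alpine:latest
RUN apk add --no-cache tzdata ca-certificates
COPY --from=builder /out/main /main
COPY ./configs /configs
EXPOSE 8000
CMD ["/main"]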
In my case, the issue was a connected VPN/proxy network on my machine.
It worked after I disconnected the VPN/proxy.
In my case I was missing the folder entries in the .dockerignore file. Do something like this:
**/*
!docker-images
!configs
!main
I am running into a problem with BuildKit and I cannot figure out the reason.
I have a Dockerfile that uses a SLES base image and does some package installation via zypper. Every time this step is executed (i.e. not cached), it takes ages to complete.
This is a dummy Dockerfile to reproduce the issue:
# syntax=docker/dockerfile:1.3
FROM registry.suse.com/suse/sles12sp4
RUN zypper search iproute2
This is the execution when I enable BuildKit:
docker build --no-cache --progress=plain --pull -t test_zypper .
#1 [internal] load build definition from Dockerfile
#1 sha256:1e8bc50247fba08161184996db9e2b6bca36c339623376a360765244d9d3ed8b
#1 transferring dockerfile: 202B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:bfa4297d1f77b21d1d84347ff3f9c338cef560c9f5c8ef8f6843338b88a83178
#2 transferring context: 2B done
#2 DONE 0.0s
#3 resolve image config for docker.io/docker/dockerfile:1.3
#3 sha256:4fcd28d33487ad029eab28c03869fd56295f3902c713674c129a438f7a780653
#3 DONE 1.1s
#4 docker-image://docker.io/docker/dockerfile:1.3#sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2
#4 sha256:7862c1373501a4a9cd96ccd04641bb1d96c86d034546e74fe74585e3dd12f952
#4 CACHED
#5 [internal] load build definition from Dockerfile
#5 sha256:adf8dd6b4b2604f820e4a4112252c8bfd5984ffa809d1fc7c5330e387575a53d
#5 DONE 0.0s
#6 [internal] load .dockerignore
#6 sha256:59c105584afe8ac8255febcea4650f6e8891b4b14fcdd7b93254039769df3828
#6 DONE 0.0s
#7 [internal] load metadata for registry.suse.com/suse/sles12sp4:latest
#7 sha256:30c143f62f5a593ad20fd34265d2933e13da97368f12f3e0c990b52851933dff
#7 DONE 0.5s
#8 [1/2] FROM registry.suse.com/suse/sles12sp4#sha256:06390bd3b9903f3d4bb1345deb7fc35e18af73de0263d0f4d5c619267bee2adf
#8 sha256:3d15a7aaf66ed6810de2347b0da9787e5a57b9c536d85ccc4b01e9eb5831bcc1
#8 CACHED
#9 [2/2] RUN zypper search iproute2
#9 sha256:17060fcd75740edd49881abc4d1b5a4f7de80f59cde5b2b6f32e97ff02bbc29d
#9 377.9 Refreshing service 'container-suseconnect-zypp'.
#9 556.7 Problem retrieving the repository index file for service 'container-suseconnect-zypp':
#9 556.7 [container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
#9 556.7 Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
#9 556.7 Loading repository data...
#9 556.7 Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
#9 556.7 Reading installed packages...
#9 556.7 No matching items found.
#9 ERROR: executor failed running [/bin/sh -c zypper search iproute2]: exit code: 104
------
> [2/2] RUN zypper search iproute2:
------
executor failed running [/bin/sh -c zypper search iproute2]: exit code: 104
This is the execution when I don't enable BuildKit:
time docker build --no-cache --progress=plain --pull -t test_zypper .
Sending build context to Docker daemon 678.5MB
Step 1/2 : FROM registry.suse.com/suse/sles12sp4
latest: Pulling from suse/sles12sp4
Digest: sha256:06390bd3b9903f3d4bb1345deb7fc35e18af73de0263d0f4d5c619267bee2adf
Status: Image is up to date for registry.suse.com/suse/sles12sp4:latest
---> 3126dff9c7fd
Step 2/2 : RUN zypper search iproute2
---> Running in 3efe8a741628
Refreshing service 'container-suseconnect-zypp'.
Problem retrieving the repository index file for service 'container-suseconnect-zypp':
[container-suseconnect-zypp|file:/usr/lib/zypp/plugins/services/container-suseconnect-zypp]
Warning: Skipping service 'container-suseconnect-zypp' because of the above error.
Loading repository data...
Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
Reading installed packages...
No matching items found.
The command '/bin/sh -c zypper search iproute2' returned a non-zero code: 104
real 0m23.972s
user 0m1.987s
sys 0m2.161s
It is not a problem of missing repositories: in my original Dockerfile everything is defined and the build eventually works, but each zypper command takes 20 minutes or more.
Is something wrong with the way I am using BuildKit?
Thanks in advance!
I am trying to take advantage of BuildKit's caching/pulling system for Docker in my CI/CD process, but it does not work as expected.
I created a dummy local example (the same happens in my CI system - AWS CodePipeline - and with both DockerHub and AWS ECR).
The Dockerfile:
# base image
FROM python:3.7-slim
# set working directory
WORKDIR /usr/src/app
# add and install requirements
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirements.txt
RUN echo 123
# add app
COPY ./run_test.py /usr/src/app/run_test.py
# run server
CMD ["python", "run_test.py"]
run_test.py is actually not interesting, but here is the code just in case:
import requests
import time
while True:
    time.sleep(1)
    print(requests)
You also need to create a requirements.txt file in the same folder (mine contains requests==2.22.0, as the build output below shows).
Beforehand, I export two environment variables:
export DOCKER_BUILDKIT=1 # to activate buildkit
export DUMMY_IMAGE_URL=bi0max/test_docker
Then, to test, I run the following commands. The first two remove the local cache to resemble the CI environment; then I build and push.
BE CAREFUL, CODE BELOW REMOVES LOCAL BUILD CACHE:
docker builder prune -a -f && \
(docker image rm $DUMMY_IMAGE_URL:latest || true) && \
docker build \
--cache-from $DUMMY_IMAGE_URL:latest \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--tag $DUMMY_IMAGE_URL:latest "." && \
docker push $DUMMY_IMAGE_URL:latest
As expected, the first run just builds everything from scratch:
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 434B done
#2 DONE 0.0s
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#12 [1/7] FROM docker.io/library/python:3.7-slim
#12 DONE 0.0s
#7 [internal] load build context
#7 DONE 0.0s
#4 importing cache manifest from bi0max/test_docker:latest
#4 ERROR: docker.io/bi0max/test_docker:latest not found
#12 [1/7] FROM docker.io/library/python:3.7-slim
#12 resolve docker.io/library/python:3.7-slim done
#12 DONE 0.0s
#7 [internal] load build context
#7 transferring context: 204B done
#7 DONE 0.1s
#5 [2/7] WORKDIR /usr/src/app
#5 DONE 0.0s
#6 [3/7] RUN pip install --upgrade pip
#6 1.951 Requirement already up-to-date: pip in /usr/local/lib/python3.7/site-packages (20.1.1)
#6 DONE 2.3s
#8 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#8 DONE 0.0s
#9 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#9 0.750 Collecting requests==2.22.0
#9 0.848 Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)
#9 0.932 Collecting idna<2.9,>=2.5
#9 0.948 Downloading idna-2.8-py2.py3-none-any.whl (58 kB)
#9 0.995 Collecting chardet<3.1.0,>=3.0.2
#9 1.011 Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
#9 1.135 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
#9 1.153 Downloading urllib3-1.25.9-py2.py3-none-any.whl (126 kB)
#9 1.264 Collecting certifi>=2017.4.17
#9 1.282 Downloading certifi-2020.4.5.1-py2.py3-none-any.whl (157 kB)
#9 1.378 Installing collected packages: idna, chardet, urllib3, certifi, requests
#9 1.916 Successfully installed certifi-2020.4.5.1 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.9
#9 DONE 2.2s
#10 [6/7] RUN echo 123
#10 0.265 123
#10 DONE 0.3s
#11 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#11 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f98327afae246096725f7e54742fe9b25079f1b779699b099e66c8def1e19052 done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
Then I slightly adjust the run_test.py file and the result is again as expected: all the layers before the last step ([7/7] COPY) are pulled from the repository and reused.
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 434B done
#1 DONE 0.1s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#8 [internal] load build context
#8 DONE 0.0s
#4 [1/7] FROM docker.io/library/python:3.7-slim
#4 DONE 0.0s
#5 importing cache manifest from bi0max/test_docker:latest
#5 DONE 1.2s
#8 [internal] load build context
#8 transferring context: 193B done
#8 DONE 0.0s
#6 [2/7] WORKDIR /usr/src/app
#6 CACHED
#7 [3/7] RUN pip install --upgrade pip
#7 CACHED
#9 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#9 CACHED
#10 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#10 CACHED
#11 [6/7] RUN echo 123
#11 pulling sha256:79fc69c08b391d082b4d2617faed489d220444fa0cf06953cdff55c667866bed
#11 pulling sha256:071624272167ab4e35a30eb1640cb3f15ced19c6cd10fa1c9d49763372e81c23
#11 pulling sha256:04ed4ecd76e1a110f468eb1a3173bbfa578c6b4c85a6dc82bf4a489ed8b8c54d
#11 pulling sha256:79fc69c08b391d082b4d2617faed489d220444fa0cf06953cdff55c667866bed 0.2s done
#11 pulling sha256:d6406c1ce2dc5e841233ebce164ee469388102cb98f1473adaeca15455d6d797
#11 pulling sha256:071624272167ab4e35a30eb1640cb3f15ced19c6cd10fa1c9d49763372e81c23 0.5s done
#11 pulling sha256:04ed4ecd76e1a110f468eb1a3173bbfa578c6b4c85a6dc82bf4a489ed8b8c54d 0.5s done
#11 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
#11 pulling sha256:d6406c1ce2dc5e841233ebce164ee469388102cb98f1473adaeca15455d6d797 0.3s done
#11 pulling sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.2s done
#11 CACHED
#12 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#12 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f37692114f10b9a3646203569a0849af20774651f4aa0f5dc8d6f133fb7ff062 done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
Now I change run_test.py again, and I would expect Docker to do the same thing as last time. But I get the following result, where it builds everything from scratch:
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 434B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/python:3.7-slim
#3 DONE 0.0s
#5 [1/7] FROM docker.io/library/python:3.7-slim
#5 DONE 0.0s
#8 [internal] load build context
#8 DONE 0.0s
#4 importing cache manifest from bi0max/test_docker:latest
#4 DONE 1.7s
#8 [internal] load build context
#8 transferring context: 182B done
#8 DONE 0.0s
#5 [1/7] FROM docker.io/library/python:3.7-slim
#5 resolve docker.io/library/python:3.7-slim done
#5 DONE 0.1s
#6 [2/7] WORKDIR /usr/src/app
#6 DONE 0.0s
#7 [3/7] RUN pip install --upgrade pip
#7 1.774 Requirement already up-to-date: pip in /usr/local/lib/python3.7/site-packages (20.1.1)
#7 DONE 2.1s
#9 [4/7] COPY ./requirements.txt /usr/src/app/requirements.txt
#9 DONE 0.0s
#10 [5/7] RUN pip $PIP_PROXY install --no-cache-dir --compile -r requirement...
#10 0.805 Collecting requests==2.22.0
#10 0.905 Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)
#10 1.079 Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1
#10 1.109 Downloading urllib3-1.25.9-py2.py3-none-any.whl (126 kB)
#10 1.242 Collecting certifi>=2017.4.17
#10 1.259 Downloading certifi-2020.4.5.1-py2.py3-none-any.whl (157 kB)
#10 1.336 Collecting idna<2.9,>=2.5
#10 1.353 Downloading idna-2.8-py2.py3-none-any.whl (58 kB)
#10 1.410 Collecting chardet<3.1.0,>=3.0.2
#10 1.428 Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
#10 1.545 Installing collected packages: urllib3, certifi, idna, chardet, requests
#10 2.102 Successfully installed certifi-2020.4.5.1 chardet-3.0.4 idna-2.8 requests-2.22.0 urllib3-1.25.9
#10 DONE 2.4s
#11 [6/7] RUN echo 123
#11 0.259 123
#11 DONE 0.3s
#12 [7/7] COPY ./run_test.py /usr/src/app/run_test.py
#12 DONE 0.0s
#13 exporting to image
#13 exporting layers done
#13 writing image sha256:f4ffb0e84e334b4b35fe2504de11012e5dc1ca5978eace055932e9bbbe83c93e done
#13 naming to docker.io/bi0max/test_docker:latest done
#13 DONE 0.0s
#14 exporting cache
#14 preparing build cache for export done
#14 DONE 0.0s
But the strangest thing for me is that when I change run_test.py a third time, it uses the cached layers again. And it continues in the same way: fourth time it doesn't use them, fifth time it does, etc.
Am I missing something here?
If I pull the image each time before building, then it always uses the cache, but that also works the same way without BuildKit.
This issue was fixed in newer Docker versions; a simple upgrade resolves it.
Otherwise, the solution described on GitHub can help if you don't want to rely on the system's Docker version: https://github.com/moby/buildkit/issues/1981#issuecomment-785534131
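Without claiming this is exactly what the linked comment describes, one common way to avoid relying on the host's Docker/BuildKit version is to run the build through a docker-container buildx builder and use a registry cache, roughly like this (image names are illustrative):
# create and select a builder that runs its own (newer) BuildKit in a container
docker buildx create --name ci-builder --driver docker-container --use
# build using a dedicated registry cache reference instead of the inline cache
docker buildx build . \
  --tag bi0max/test_docker:latest \
  --cache-from type=registry,ref=bi0max/test_docker:buildcache \
  --cache-to type=registry,ref=bi0max/test_docker:buildcache,mode=max \
  --push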
I believe the inline cache image becomes invalid (or incomplete) if it was built while reusing the cache. It's either a limitation or a bug.
There is a workaround: you can tag a distinct cache image that you only push to the registry when BuildKit has actually rebuilt the image. AFAIK there is no way to know whether BuildKit used the cache or not, but the log is filled with CACHED lines when it did, so we can grep for that. For example:
# enable buildkit:
$ export DOCKER_BUILDKIT=1
# build image trying to use cache image + build cache image:
$ docker build . \
    --tag image:latest \
    --tag image:build-cache \
    --cache-from=image:build-cache \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    --progress=plain 2>&1 | tee docker.log   # BuildKit writes its progress to stderr
# push new image to the registry:
$ docker push image:latest
# trick: only push cache image to the registry if it was rebuilt:
$ grep -q CACHED docker.log || docker push image:build-cache
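On the next run you use the same build command; --cache-from=image:build-cache then points at an image whose inline cache metadata was written during a full rebuild, so BuildKit can keep reusing it, and the build-cache tag is only overwritten after the next full rebuild.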