C++ and .Net core not building in single Dockerfile - docker

I have the following Dockerfile, which compiles both C++ and .NET projects.
When I add the .NET code, the C++ layers stop working; if I remove the .NET layers, the C++ layers work again.
Is this something that can't be done in a single Dockerfile?
# GCC support can be specified at major, minor, or micro version
# (e.g. 8, 8.2 or 8.2.0).
# See https://hub.docker.com/r/library/gcc/ for all supported GCC
# tags from Docker Hub.
# See https://docs.docker.com/samples/library/gcc/ for more on how to use this image
FROM gcc:latest
RUN apt-get update
RUN apt-get install -y libmotif-dev
# These commands copy your files into the specified directory in the image
# and set that as the working location
COPY . /usr/src/myapp
COPY ca-8-5-5-linux-x86-64/redist /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/demo
WORKDIR /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/samplecode/unix/
# This command compiles your app using GCC, adjust for your source code
RUN make
###############################################################################################
# NEED HELP HERE. ABOVE IS NOT WORKING. IF I REMOVE THE FOLLOWING, THE ABOVE WORKS.
###############################################################################################
FROM mcr.microsoft.com/dotnet/runtime:6.0-focal AS base
WORKDIR /app
# Creates a non-root user with an explicit UID and adds permission to access the /app folder
# For more info, please refer to https://aka.ms/vscode-docker-dotnet-configure-containers
RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser
COPY . .
FROM mcr.microsoft.com/dotnet/sdk:6.0-focal AS build
WORKDIR /src
COPY ["sdk/samplecode/myAppExport/myAppExport.csproj", "./"]
RUN dotnet restore "myAppExport.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "sdk/samplecode/myAppExport/myAppExport.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "sdk/samplecode/myAppExport/myAppExport.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myAppExport.dll"]
Following is the log I get when both the `make` and `dotnet build` steps are in the Dockerfile together:
[+] Building 19.4s (20/20) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.70kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for mcr.microsoft.com/dotnet/sdk:6.0-focal 0.5s
=> [internal] load metadata for mcr.microsoft.com/dotnet/runtime:6.0-focal 0.4s
=> [build 1/7] FROM mcr.microsoft.com/dotnet/sdk:6.0-focal@sha256:213bd9012c064e2e80c9ffc17e4e4ebd97fe01a232370af2eab31ecf4c773fcb 0.0s
=> [base 1/4] FROM mcr.microsoft.com/dotnet/runtime:6.0-focal@sha256:9adcd9a2eee0506f461f81226bea8d725c5111809f1afcd12534c523f6406665 0.0s
=> => transferring context: 172.90MB 3.3s
=> CACHED [base 2/4] WORKDIR /app 0.0s
=> CACHED [base 3/4] RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app 0.0s
=> CACHED [build 2/7] WORKDIR /src 0.0s
=> CACHED [build 3/7] COPY [sdk/samplecode/myAppExport/myAppExport.csproj, ./] 0.0s
=> CACHED [build 4/7] RUN dotnet restore "myAppExport.csproj" 0.0s
=> [build 5/7] COPY . . 5.6s
=> [base 4/4] COPY . . 5.6s
=> [final 1/2] WORKDIR /app 0.1s
=> [build 6/7] WORKDIR /src/. 0.1s
=> [build 7/7] RUN dotnet build "sdk/samplecode/myAppExport/myAppExport.csproj" -c Release -o /app/build 5.5s
=> [publish 1/1] RUN dotnet publish "sdk/samplecode/myAppExport/myAppExport.csproj" -c Release -o /app/publish 2.3s
=> [final 2/2] COPY --from=publish /app/publish . 0.1s
=> exporting to image 1.8s
=> => exporting layers 1.8s
=> => writing image sha256:96304565f390cf4ba352b792e6fb93c96832cdb41b56cd465feacf34f3f5005c 0.0s
=> => naming to docker.io/library/myAppextract:1 0.0s
Following is the log when only the gcc part is in the Dockerfile (dotnet part removed):
[+] Building 10.8s (12/12) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 1.74kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> [internal] load metadata for docker.io/library/gcc:latest 3.5s
=> [1/7] FROM docker.io/library/gcc:latest@sha256:084eaedf8e3c51f3db939ad7ed2b1455ff9ce4705845a014fb9fe5577b35c901 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 54.94kB 0.1s
=> CACHED [2/7] RUN apt-get update 0.0s
=> CACHED [3/7] RUN apt-get install -y libmotif-dev 0.0s
=> [4/7] COPY . /usr/src/myapp 2.8s
=> [5/7] COPY ca-8-5-5-linux-x86-64/redist /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/demo 0.6s
=> [6/7] WORKDIR /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/samplecode/unix/ 0.1s
=> [7/7] RUN make 1.9s
=> exporting to image 1.7s
=> => exporting layers 1.6s
=> => writing image sha256:6904800ca9760db29751820b15f8952213ad084955f62c03ef98b6723b484420 0.0s
=> => naming to docker.io/library/myappextract:1

Actually, I changed my approach to the problem.
Instead of relying on `FROM gcc:latest`, I added `build-essential` to the install step:
RUN apt-get install -y libmotif-dev build-essential
That pulled in all the required gcc and make dependencies, and things are working as expected now.
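For reference, the original Dockerfile was not broken so much as misunderstood: when a Dockerfile contains several `FROM` lines, it is a multi-stage build, and only the final stage becomes the output image. The gcc stage still builds, but its filesystem is discarded unless a later stage copies files out of it with `COPY --from`. A sketch of how the two halves could be combined in one file (the stage name `cpp-build` and the artifact path are assumptions; adjust them to what `make` actually produces):

```dockerfile
# Stage 1: build the C++ sample, named so later stages can reference it
FROM gcc:latest AS cpp-build
RUN apt-get update && apt-get install -y libmotif-dev
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/samplecode/unix/
RUN make

# ... .NET build/publish stages as before ...

FROM mcr.microsoft.com/dotnet/runtime:6.0-focal AS final
WORKDIR /app
# Carry the C++ build output out of the first stage into the final image.
# The source path is an assumed location of the make artifacts.
COPY --from=cpp-build /usr/src/myapp/ca-8-5-5-linux-x86-64/sdk/samplecode/unix/ /app/native/
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myAppExport.dll"]
```

The key line is `COPY --from=cpp-build ...`; without something like it, the gcc stage runs but contributes nothing to the final image.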

Related

Docker cross compile build context leads to `dockerfile.v0: unsupported frontend capability moby.buildkit.frontend.contexts`

I'm trying to cross-compile a Rust application for my Raspberry Pi (compiling on the Pi itself is very slow).
For that, I run a Dockerfile with a build context located elsewhere (it contains certificates and other files that are needed in the Docker image).
Dockerfile (./myapp/Dockerfile)
FROM rust
RUN apt-get update && apt-get install -y pkg-config libssl-dev build-essential cmake
WORKDIR /home/myapp
COPY --from=local ./myapp/. .
COPY --from=local ./mqtt-helper/ /home/mqtt-helper/
COPY --from=local ./mqtt-broker/config/certs/ca.crt ./certs/
COPY --from=local ./mqtt-broker/config/certs/mqtt-subscriber.key ./certs/
COPY --from=local ./mqtt-broker/config/certs/mqtt-subscriber.crt ./certs/
ENV TZ=Europe/Berlin
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN cargo install --path .
EXPOSE 8080
CMD ["myapp"]
Now I'm trying to run:
docker buildx build --platform linux/arm64 --build-context local=./ ./myapp/
But this call always fails with:
[+] Building 0.0s (2/2) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
ERROR: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: unsupported frontend capability moby.buildkit.frontend.contexts
thank you
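The error message points at the Dockerfile frontend: named build contexts (`--build-context name=...` consumed via `COPY --from=name`) require Dockerfile frontend 1.4 or newer together with a recent buildx (0.8+). One commonly suggested fix is to pin the frontend with a syntax directive on the first line of the Dockerfile (the exact version tag below is an assumption; any 1.4+ tag should work):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM rust
# ... rest of the Dockerfile unchanged; COPY --from=local can now
# resolve the extra context passed via --build-context local=./
```

If the directive alone does not help, upgrading Docker/buildx to a version whose BuildKit advertises the `contexts` frontend capability is the other half of the fix.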

Docker buildx failing to find local docker image

I've got two Docker images that I need to cross-compile: libs and devicemanager, where the devicemanager image depends on the libs image. The libs Docker image builds fine using
$ docker buildx build --platform linux/arm/v7 --rm --file ./Dockerfile --tag libs:latest --load ..
however, when I try to build the devicemanager image using
$ docker buildx build --platform linux/arm/v7 --rm --file ./Dockerfile --tag devicemanager:latest --load ..
I get the following error:
[+] Building 0.5s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 2.18kB 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 164B 0.0s
=> CANCELED [internal] load metadata for docker.io/library/alpine:3.16.0 0.2s
=> ERROR [internal] load metadata for docker.io/library/libs:latest 0.2s
------
> [internal] load metadata for docker.io/library/libs:latest:
------
Dockerfile:2
--------------------
1 | # STAGE 1 - Pre-built libs
2 | >>> FROM libs:latest as libs-dev
3 |
4 | # STAGE 2 - BUILD
--------------------
ERROR: failed to solve: libs:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
The Dockerfile for devicemanager looks like this:
# STAGE 1 - Pre-built libs
FROM libs:latest as libs-dev
# ...
Notably, everything works well when I do not try to cross-compile using buildx and instead just use the standard build command to build the images for my native architecture. I am wondering what could possibly be missing such that everything works using build but not buildx.
Edit 1: Responding to BMitch's comment, here are the build logs when creating the libs:latest Docker image using
$ docker buildx build --platform linux/arm/v7 --rm --file ./Dockerfile --tag libs:latest --load ..
[+] Building 32.8s (56/56) FINISHED
=> [internal] load .dockerignore 0.2s
=> => transferring context: 164B 0.0s
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 3.58kB 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.16.0 0.4s
=> [stage-1 1/36] FROM docker.io/library/alpine:3.16.0@sha256:686d8c9dfa6f3ccfc8230bc3178d23f84eeaf7e457f36f271ab1acc53015037c 0.2s
=> => resolve docker.io/library/alpine:3.16.0@sha256:686d8c9dfa6f3ccfc8230bc3178d23f84eeaf7e457f36f271ab1acc53015037c 0.1s
=> [internal] load build context 0.2s
=> => transferring context: 25.04kB 0.0s
=> CACHED [stage-1 2/36] RUN apk update && apk add bash 0.0s
=> CACHED [stage-1 3/36] RUN apk add libgcc 0.0s
=> CACHED [stage-1 4/36] RUN apk add --no-cache libstdc++ 0.0s
=> CACHED [stage-1 5/36] RUN apk add valgrind 0.0s
=> CACHED [stage-1 6/36] RUN apk add --no-cache python3 0.0s
=> CACHED [stage-1 7/36] RUN apk add --no-cache py3-pip 0.0s
=> CACHED [stage-1 8/36] RUN apk add valgrind 0.0s
=> CACHED [libs-dev 2/15] RUN apk add --no-cache coreutils gcc g++ cargo linux-headers make musl-dev git rust clang-libs pkgconf 0.0s
=> CACHED [libs-dev 3/15] COPY /lib_cfu/ /lib_cfu/ 0.0s
=> CACHED [libs-dev 4/15] RUN cd /lib_cfu && make 0.0s
=> CACHED [libs-dev 5/15] COPY /lib_mxml/ /lib_mxml/ 0.0s
=> CACHED [libs-dev 6/15] RUN cd /lib_mxml && ./configure 0.0s
=> CACHED [libs-dev 7/15] RUN cd /lib_mxml && make 0.0s
=> CACHED [libs-dev 8/15] COPY /lib_googletest/ /lib_googletest/ 0.0s
=> CACHED [libs-dev 9/15] COPY /lib_permissions/ /lib_permissions/ 0.0s
=> CACHED [libs-dev 10/15] COPY /lib_revars/ /lib_revars/ 0.0s
=> CACHED [libs-dev 11/15] RUN cd /lib_revars && make 0.0s
=> CACHED [libs-dev 12/15] COPY /lib_devserver/ /lib_devserver/ 0.0s
=> CACHED [libs-dev 13/15] RUN cd /lib_devserver && make 0.0s
=> CACHED [libs-dev 14/15] COPY /lib_rtpc/ /lib_rtpc/ 0.0s
=> CACHED [libs-dev 15/15] RUN cd /lib_rtpc && make 0.0s
=> CACHED [stage-1 9/36] COPY --from=libs-dev /lib_cfu/*.a /lib_cfu/ 0.0s
=> CACHED [stage-1 10/36] COPY --from=libs-dev /lib_mxml/*.a /lib_mxml/ 0.0s
=> CACHED [stage-1 11/36] COPY --from=libs-dev /lib_revars/hiredis/*.a /lib_revars/hiredis/ 0.0s
=> CACHED [stage-1 12/36] COPY --from=libs-dev /lib_revars/src/*.a /lib_revars/src/ 0.0s
=> CACHED [stage-1 13/36] COPY --from=libs-dev /lib_revars/perf/revars_perf /lib_revars/ 0.0s
=> CACHED [stage-1 14/36] COPY --from=libs-dev /lib_revars/test/revars_test /lib_revars/ 0.0s
=> CACHED [stage-1 15/36] COPY --from=libs-dev /lib_revars/gtest/gtest_revars_tests /lib_revars/ 0.0s
=> CACHED [stage-1 16/36] COPY --from=libs-dev /lib_cfu/*.so /usr/local/lib/ 0.0s
=> CACHED [stage-1 17/36] COPY --from=libs-dev /lib_mxml/*.so /usr/local/lib/ 0.0s
=> CACHED [stage-1 18/36] COPY --from=libs-dev /lib_revars/hiredis/*.so /usr/local/lib/ 0.0s
=> CACHED [stage-1 19/36] COPY --from=libs-dev /lib_revars/src/*.so /usr/local/lib/ 0.0s
=> CACHED [stage-1 20/36] COPY --from=libs-dev /lib_devserver/src/*.a /lib_devserver/src/ 0.0s
=> CACHED [stage-1 21/36] COPY --from=libs-dev /lib_devserver/gtest/gtest_devserver_basic /lib_devserver/ 0.0s
=> CACHED [stage-1 22/36] COPY --from=libs-dev /lib_devserver/demo/devserver_pdm /lib_devserver/ 0.0s
=> CACHED [stage-1 23/36] COPY --from=libs-dev /lib_devserver/demo/devserver_cdm /lib_devserver/ 0.0s
=> CACHED [stage-1 24/36] COPY --from=libs-dev /lib_rtpc/*.a /lib_rtpc/ 0.0s
=> CACHED [stage-1 25/36] COPY --from=libs-dev /lib_rtpc/*.so /lib_rtpc/ 0.0s
=> CACHED [stage-1 26/36] COPY --from=libs-dev /lib_rtpc/*.so.* /lib_rtpc/ 0.0s
=> CACHED [stage-1 27/36] COPY --from=libs-dev /lib_rtpc/cfg/*.xml /lib_rtpc/ 0.0s
=> CACHED [stage-1 28/36] COPY --from=libs-dev /lib_cfu/*.h /usr/local/include/libcfu/ 0.0s
=> CACHED [stage-1 29/36] COPY --from=libs-dev /lib_devserver/inc/*.h /usr/local/include/devserver/ 0.0s
=> CACHED [stage-1 30/36] COPY --from=libs-dev /lib_googletest/googlemock/include/gmock/*.h /usr/local/include/googletest/gmock/ 0.0s
=> CACHED [stage-1 31/36] COPY --from=libs-dev /lib_googletest/googletest/include/gtest/*.h /usr/local/include/googletest/gtest/ 0.0s
=> CACHED [stage-1 32/36] COPY --from=libs-dev /lib_mxml/mxml.h /usr/local/include/mxml/ 0.0s
=> CACHED [stage-1 33/36] COPY --from=libs-dev /lib_permissions/inc/*.h /usr/local/include/permissions/ 0.0s
=> CACHED [stage-1 34/36] COPY --from=libs-dev /lib_revars/inc/*.h /usr/local/include/revars/ 0.0s
=> CACHED [stage-1 35/36] COPY --from=libs-dev /lib_rtpc/inc/*.h /usr/local/include/rtpc/ 0.0s
=> CACHED [stage-1 36/36] WORKDIR /lib_revars 0.0s
=> exporting to oci image format 31.7s
=> => exporting layers 12.4s
=> => exporting manifest sha256:e53ff18b9b24f584ea9b8abe40e795a90c9e024d0ef96b742967019242979d2a 0.1s
=> => exporting config sha256:f193534c0a56fb75c73d2f4cb78dd6349587ba574c5390e1ab7214d21ed19994 0.1s
=> => sending tarball 19.0s
=> importing to docker
I can also see that libs:latest has built successfully when running docker images.
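A likely cause: buildx builders that use the docker-container driver run BuildKit in a separate container and cannot see images in the local `docker images` store, so `FROM libs:latest` is resolved against Docker Hub (hence the pull-access-denied error), while classic `docker build` resolves it locally and succeeds. Two common workarounds, with illustrative registry names and flags:

```shell
# 1) Push the dependency somewhere the builder can pull from, then
#    reference that tag in the devicemanager Dockerfile:
docker tag libs:latest myregistry.example.com/libs:latest
docker push myregistry.example.com/libs:latest

# 2) Or substitute the image at build time via a named context
#    (needs buildx >= 0.8 and the dockerfile:1.4 frontend):
docker buildx build --platform linux/arm/v7 \
  --build-context libs:latest=docker-image://myregistry.example.com/libs:latest \
  --tag devicemanager:latest --load .
```

A third option is to create the builder with the default docker driver, which shares the local image store, at the cost of some multi-platform features.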

Docker file build failed

I'm trying to build my Dockerfile, but the build fails.
My folder structure in VS Code looks like this (screenshot not included):
My Dockerfile has only a few commands:
COPY requirements.txt /tmp/pip-tmp/
RUN pip install -r /tmp/pip-tmp/requirements.txt \
&& rm -rf /tmp/pip-tmp
I get this error message:
PS C:\LearbayDatascience> docker build -t proj:myapp /.devcontainer
unable to prepare context: path "/.devcontainer" not found
PS C:\LearbayDatascience> docker build -t proj:myapp C:/LearbayDatascience/.devcontainer
[+] Building 0.6s (7/8)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for mcr.microsoft.com/vscode/devcontainers/python:0-3.10-bullseye 0.5s
=> [internal] load build context 0.0s
=> => transferring context: 2B 0.0s
=> [1/4] FROM mcr.microsoft.com/vscode/devcontainers/python:0-3.10-bullseye@sha256:21a12816fcadaa16dabb4ba0e8c358361d02ea062b1b89db8786eb67173489d0 0.0s
=> CACHED [2/4] RUN if [ "none" != "none" ]; then su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install none 2>&1"; fi 0.0s
=> ERROR [3/4] COPY requirements.txt /tmp/pip-tmp/requirements.txt 0.0s
------
> [3/4] COPY requirements.txt /tmp/pip-tmp/requirements.txt:
------
failed to compute cache key: "/requirements.txt" not found: not found
Why is requirements.txt not found when I run the command from the same folder structure?
If I rebuild with VS Code, the .devcontainer loads successfully.
Please help in this regard.
Try putting the Dockerfile in the same directory as requirements.txt.
Then, in your Dockerfile:
COPY ./requirements.txt /tmp/pip-tmp/requirements.txt
RUN pip install -r /tmp/pip-tmp/requirements.txt \
&& rm -rf /tmp/pip-tmp
After this, navigate to the folder where the Dockerfile is located and run the build command:
docker build -t proj:myapp .
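Alternatively, if the Dockerfile should stay in `.devcontainer`, keep the project root as the build context and point at the Dockerfile with `-f`. `COPY` paths are resolved relative to the context directory, not the Dockerfile's location:

```shell
# Run from C:\LearbayDatascience, the folder containing requirements.txt
docker build -t proj:myapp -f .devcontainer/Dockerfile .
```

The original command failed because it passed `.devcontainer` as the context, and requirements.txt lives outside that directory.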

Docker Can't Find Install.sh

I created a folder in Windows that has a Dockerfile and an install.sh script. When I attempt to build the docker image, I get an error that says:
=> ERROR [4/4] RUN /install.sh 0.6s
------
> [4/4] RUN /install.sh:
#7 0.584 /bin/sh: 1: /install.sh: not found
This is my Dockerfile:
FROM ubuntu:latest
ADD install.sh /
RUN chmod u+x /install.sh
RUN /install.sh
ENV PATH /root/miniconda3/bin:$PATH
This is my install.sh:
#!/bin/bash
apt-get update
apt-get upgrade -y
apt-get install -y bzip2 gcc git ping htop screen vim wget
apt-get clean
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b
rm -rf Miniconda.sh
export PATH="/root/miniconda3/bin:$PATH"
conda install -y pandas
sh -c "$(wget https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh -O -)"
CMD ["/bin/zsh"]
This is what I am running in Windows to produce the error:
docker build -t app_test:v1.01 .
output:
[+] Building 1.3s (8/8) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 158B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 490B 0.0s
=> CACHED [1/4] FROM docker.io/library/ubuntu:latest 0.0s
=> [2/4] ADD install.sh / 0.0s
=> [3/4] RUN chmod u+x /install.sh 0.5s
=> ERROR [4/4] RUN /install.sh 0.6s
------
> [4/4] RUN /install.sh:
#7 0.584 /bin/sh: 1: /install.sh: not found
------
executor failed running [/bin/sh -c /install.sh]: exit code: 127
So far I've tried the following:
renaming the install.sh file to something else
changing the ADD line to COPY, so that the line reads COPY install.sh /install.sh
googling the issue to see if anyone else has experienced something similar.
EDIT #1:
tree and ls output:
PS C:\Users\Rick\Documents\experiments> tree
Folder PATH listing
Volume serial number is D49A-7235
C:.
No subfolders exist
PS C:\Users\Rick\Documents\experiments> ls
Directory: C:\Users\Rick\Documents\experiments
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 11/21/2021 11:11 AM 132 Dockerfile
-a--- 11/21/2021 9:05 AM 451 install.sh
EDIT #2 adding RUN ls / to Dockerfile:
[+] Building 2.0s (9/9) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 181B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> [1/5] FROM docker.io/library/ubuntu:latest 0.0s
=> CACHED [2/5] COPY install.sh /install.sh 0.0s
=> [3/5] RUN ls / 0.5s
=> [4/5] RUN chmod u+x /install.sh 0.6s
=> ERROR [5/5] RUN /install.sh 0.7s
------
> [5/5] RUN /install.sh:
#9 0.694 /bin/sh: 1: /install.sh: not found
------
executor failed running [/bin/sh -c /install.sh]: exit code: 127
The cause of the issue was that install.sh was saved with Windows (CRLF) line endings; the shebang line then ends in a carriage return, so the kernel looks for an interpreter literally named "/bin/bash\r" and reports "not found". The file needs to be saved with Unix (LF) line endings. In Visual Studio Code, click the "CRLF" indicator in the status bar and change it to "LF".
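The failure mode can be reproduced and fixed outside Docker. A small sketch (the file name is reused from the question; the `sed -i` call assumes GNU sed, as shipped in the Ubuntu base image):

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf '#!/bin/bash\r\necho hello\r\n' > install.sh
chmod +x install.sh
# At this point the shebang ends in \r, so running it would fail with
# "not found" even though the file clearly exists.

# Strip the carriage returns (this is what switching CRLF -> LF does).
sed -i 's/\r$//' install.sh
./install.sh   # now runs and prints "hello"
```

Inside a Dockerfile, the same fix can be applied defensively with a `RUN sed -i 's/\r$//' /install.sh` step before executing the script.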

Docker build fails with 'secret pip not found: not found' error

I am trying to build a Docker image but I'm getting:
secret pip not found: not found
Any ideas on this?
Dockerfile:
FROM <jfrog dockerfile package>
SHELL ["/bin/bash", "-c"]
RUN apt-get update \
&& apt-get -y install chromium chromium-driver
COPY requirments.txt
RUN pip install -r requirments.txt
USER nobody
CMD robot ./smoketests-nonprod.robot \
&& robot ./smoketests-prod.robot
The log is as follows:
$ docker build -t robottests .
[+] Building 1.6s (18/25)
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 39B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 35B 0.0s
=> resolve image config for my-company-docker-virtual.jfrog.io/docker/dockerfile:1.2 0.0s
=> CACHED docker-image://my-company-docker-virtual.jfrog.io/docker/dockerfile:1.2 0.0s
=> [internal] load metadata for my-company-docker-virtual.jfrog.io/node:14-buster-slim 0.0s
=> [internal] load metadata for my-company-docker-virtual.jfrog.io/python:3-slim 0.0s
=> [base 1/7] FROM my-company-docker-virtual.jfrog.io/python:3-slim 0.0s
=> [client 1/6] FROM my-company-docker-virtual.jfrog.io/node:14-buster-slim 0.0s
=> [internal] load build context 0.1s
=> => transferring context: 5.25kB 0.0s
=> CACHED [base 2/7] RUN echo 'APT { Default-Release "stable"; };' >/etc/apt/apt.conf && echo deb http://deb.debian.org/debian testing main >>/etc/apt/sources.list 0.0s
=> CACHED [base 3/7] RUN --mount=type=cache,target=/var/cache/apt --mount=type=secret,id=sources.list,target=/etc/apt/sources.list,required=true apt update && apt -y install libcap2-bin/testing 0.0s
=> CACHED [base 4/7] RUN ["/sbin/setcap", "cap_net_bind_service,cap_setpcap+p", "/sbin/capsh"] 0.0s
=> CACHED [base 5/7] WORKDIR /project 0.0s
=> CACHED [base 6/7] COPY pyproject.toml setup.* . 0.0s
=> CACHED [client 2/6] WORKDIR /client 0.0s
=> CACHED [client 3/6] COPY package*.json . 0.0s
=> ERROR [base 7/7] RUN --mount=type=cache,target=/root/.cache --mount=type=secret,id=pip,target=/etc/pip.conf,required=true mkdir -p src && pip install -U pip wheel && pip install . && pip unin 0.1s
=> CANCELED [client 4/6] RUN --mount=type=secret,id=npmrc,target=/usr/local/etc/npmrc,required=true --mount=type=bind,source=.npmrc,target=/root/.npmrc --mount=type=cache,target=/root/.npm npm c 0.2s
------
> [base 7/7] RUN --mount=type=cache,target=/root/.cache --mount=type=secret,id=pip,target=/etc/pip.conf,required=true mkdir -p src && pip install -U pip wheel && pip install . && pip uninstall -y $(./setup.py --name):
------
secret pip not found: not found
Any help would be appreciated
This is using the relatively new `--secret` option, which allows you to mount secrets at build time.
The general way you use it: you keep the secret in a file outside the build and assign it an id.
In your case, you'd have a pip.conf file somewhere and specify it in your build command:
docker build --secret id=pip,src=pip.conf -t robottests .
This makes pip.conf available during the build without it becoming part of your image (presumably because it contains authentication secrets for accessing your internal PyPI).
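A minimal pair showing how the flag and the Dockerfile side fit together (the base image and file names here are illustrative, not the asker's real setup):

```dockerfile
# syntax=docker/dockerfile:1.2
FROM python:3-slim
COPY requirements.txt .
# The secret is mounted only for the duration of this RUN step and is
# never written into an image layer.
RUN --mount=type=secret,id=pip,target=/etc/pip.conf,required=true \
    pip install -r requirements.txt
```

Building this with `docker build --secret id=pip,src=pip.conf .` succeeds; omitting the `--secret` flag reproduces the `secret pip not found` error, because the mount is marked `required=true`.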
Maybe I'm wrong, but it looks like the Dockerfile you posted does not correspond to the logs, or some parts that would have been helpful are missing.
Based on the failing log, I'd expect the Dockerfile that errors to contain something like this:
RUN ["/sbin/setcap", "cap_net_bind_service,cap_setpcap+p", "/sbin/capsh"]
WORKDIR /project
COPY pyproject.toml setup.* .
WORKDIR /client
RUN --mount=type=cache,target=/root/.cache --mount=type=secret,id=pip,target=/etc/pip.conf,required=true mkdir -p src && pip install -U pip wheel && pip install . && pip unin...
Because in this last line is the part that fails:
--mount=type=secret,id=pip,target=/etc/pip.conf,required=true
With the link provided by Anthony Sottile, or this link, you should be able to find out what is wrong in your command.
