Some unresolved dependencies have extra attributes - docker

I used the Dockerfile below to build play-samples-play-scala-hello-world-tutorial (https://github.com/playframework/play-samples/tree/2.8.x/play-scala-hello-world-tutorial).
I want to build the tutorial with this Dockerfile, but I get an error that looks like a download failure. I wonder whether this is a network issue or a problem with the Dockerfile.
FROM openjdk:11
ENV TZ=America/Los_Angeles
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ENV HULU_ENV=staging
ADD . /play-samples-play-scala-hello-world-tutorial
WORKDIR /play-samples-play-scala-hello-world-tutorial
RUN curl -L https://github.com/sbt/sbt/releases/download/v1.5.2/sbt-1.5.2.tgz -o sbt.tgz
RUN tar xf sbt.tgz
RUN ./sbt/bin/sbt clean stage
but the build fails with the error below when I run:
docker build . -f Dockerfile
#11 0.595 copying runtime jar...
#11 69.16 [warn] Note: Some unresolved dependencies have extra attributes. Check that these dependencies exist with the requested attributes.
#11 69.16 [warn] com.typesafe.sbt:sbt-js-engine:1.2.3 (scalaVersion=2.12, sbtVersion=1.0)
#11 69.16 [warn] org.foundweekends.giter8:sbt-giter8-scaffold:0.11.0 (sbtVersion=1.0, scalaVersion=2.12)
#11 69.16 [warn] com.typesafe.sbt:sbt-native-packager:1.5.2 (scalaVersion=2.12, sbtVersion=1.0)
#11 69.16 [warn] com.lightbend.sbt:sbt-javaagent:0.1.5 (scalaVersion=2.12, sbtVersion=1.0)
#11 69.16 [warn] com.typesafe.sbt:sbt-twirl:1.5.1 (scalaVersion=2.12, sbtVersion=1.0)
#11 69.16 [warn] com.typesafe.sbt:sbt-web:1.4.4 (scalaVersion=2.12, sbtVersion=1.0)
#11 69.16 [warn]
#11 69.16 [warn] Note: Unresolved dependencies path:
#11 69.25 [error] Error downloading org.foundweekends.giter8:sbt-giter8-scaffold;sbtVersion=1.0;scalaVersion=2.12:0.11.0
#11 69.25 [error] Not found
#11 69.25 [error] Not found
#11 69.25 [error] not found: https://repo1.maven.org/maven2/org/foundweekends/giter8/sbt-giter8-scaffold_2.12_1.0/0.11.0/sbt-giter8-scaffold-0.11.0.pom
#11 69.25 [error] not found: /root/.ivy2/local/org.foundweekends.giter8/sbt-giter8-scaffold/scala_2.12/sbt_1.0/0.11.0/ivys/ivy.xml
#11 69.25 [error] download error: Caught javax.net.ssl.SSLHandshakeException (Remote host terminated the handshake) while downloading https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.foundweekends.giter8/sbt-giter8-scaffold/scala_2.12/sbt_1.0/0.11.0/ivys/ivy.xml
#11 69.25 [error] download error: Caught javax.net.ssl.SSLHandshakeException (Remote host terminated the handshake) while downloading https://repo.typesafe.com/typesafe/ivy-releases/org.foundweekends.giter8/sbt-giter8-scaffold/scala_2.12/sbt_1.0/0.11.0/ivys/ivy.xml
Any help is appreciated!

The relevant error is:
download error: Caught javax.net.ssl.SSLHandshakeException (Remote host terminated the handshake) while downloading https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.foundweekends.giter8/sbt-giter8-scaffold/scala_2.12/sbt_1.0/0.11.0/ivys/ivy.xml
sbt cannot connect to the repository to download the dependencies because of an HTTPS/TLS problem.
This can have several causes: either your container doesn't have the proper CA certificates, or you are running behind a corporate proxy that intercepts TLS and breaks the certificate chain.
You should be able to find more help by searching for the SSLHandshakeException error.
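If the container sits behind a TLS-intercepting proxy, one common workaround is to import the proxy's root CA into the JVM truststore before running sbt, so the repository hosts can be verified again. A minimal sketch for the openjdk:11 image above, assuming the corporate root CA has been exported into the build context as corp-root.crt (the file name and alias are placeholders):
COPY corp-root.crt /tmp/corp-root.crt
RUN keytool -importcert -noprompt -trustcacerts \
      -alias corp-root \
      -file /tmp/corp-root.crt \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit
Placing these lines before the RUN ./sbt/bin/sbt clean stage step lets sbt resolve the plugins over HTTPS; if the same build works outside the proxy, the problem is the network, not the Dockerfile.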

Related

How do I avoid a "x509: certificate signed by unknown authority" when doing a "go get download" from an alpine container?

I am trying to build coredns from scratch with the following Dockerfile:
FROM golang:alpine
SHELL [ "/bin/sh", "-ec" ]
RUN apk update && apk add --no-cache git make ca-certificates openssl && update-ca-certificates
RUN git clone https://github.com/coredns/coredns.git
WORKDIR /go/coredns
RUN go get download
RUN make
When I run docker build --no-cache --progress=plain -t coredns . this is the output and error I am getting:
#1 [internal] load build definition from Dockerfile
#1 sha256:5b65661f68f3298655d88d1e83c5014118e9d278e724f83e2f8d968a8f11fe27
#1 transferring dockerfile: 619B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:2e78fdc563f1836b7815b48a445b2878de57404b5573a93080990b3c49e92f8f
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/golang:alpine
#3 sha256:299327d28eff710219f2e24597cfa9b226e8b1b0dc90f9e2122573004cfe837f
#3 DONE 0.5s
#4 [1/6] FROM docker.io/library/golang:alpine@sha256:2381c1e5f8350a901597d633b2e517775eeac7a6682be39225a93b22cfd0f8bb
#4 sha256:bcd1e622e133c928bad4175797b9e323eb9ac29a1d90fbb12f2566da7e868b8f
#4 CACHED
#5 [2/6] RUN apk update && apk add --no-cache git make ca-certificates openssl && update-ca-certificates
#5 sha256:6dd058a5b7f80d591599c7ab466c65cf38e8d5d1b7ddb8f4d2e5d1c0e79a32f0
#5 0.198 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
#5 0.847 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
#5 1.224 v3.17.1-21-gf40c2ce77f [https://dl-cdn.alpinelinux.org/alpine/v3.17/main]
#5 1.224 v3.17.1-23-g06668be47f [https://dl-cdn.alpinelinux.org/alpine/v3.17/community]
#5 1.224 OK: 17813 distinct packages available
#5 1.280 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
#5 1.753 fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
#5 2.043 (1/8) Installing brotli-libs (1.0.9-r9)
#5 2.120 (2/8) Installing nghttp2-libs (1.51.0-r0)
#5 2.182 (3/8) Installing libcurl (7.87.0-r1)
#5 2.257 (4/8) Installing libexpat (2.5.0-r0)
#5 2.314 (5/8) Installing pcre2 (10.42-r0)
#5 2.387 (6/8) Installing git (2.38.2-r0)
#5 2.622 (7/8) Installing make (4.3-r1)
#5 2.686 (8/8) Installing openssl (3.0.7-r2)
#5 2.763 Executing busybox-1.35.0-r29.trigger
#5 2.774 OK: 17 MiB in 24 packages
#5 DONE 2.9s
#6 [3/6] RUN git clone https://github.com/coredns/coredns.git
#6 sha256:aae1eab60ab1f0ffb8d8a48bd03ef02b93bb537b82f1bd4285cfcb2731e19ff4
#6 0.264 Cloning into 'coredns'...
#6 DONE 14.1s
#7 [4/6] WORKDIR /go/coredns
#7 sha256:2291c568fa24f46c6531c6e7d41d5e1150d10485b34e88a85f81542e26295acb
#7 DONE 0.0s
#8 [5/6] RUN go get download
#8 sha256:b2878fe66127be7ffe2e7f4e1f6b538679aebda0abffdd20b14bf928ef23957f
#8 3.603 go: cloud.google.com/go/compute@v1.14.0: Get "https://proxy.golang.org/cloud.google.com/go/compute/@v/v1.14.0.mod": x509: certificate signed by unknown authority
#8 ERROR: executor failed running [/bin/sh -ec go get download]: exit code: 1
------
> [5/6] RUN go get download:
------
executor failed running [/bin/sh -ec go get download]: exit code: 1
I've googled my heart out trying to figure out how to get past the "x509: certificate signed by unknown authority" error. Any help is appreciated.
It looks like the issue was caused by the Cisco AnyConnect client on my Mac. You can uninstall Cisco AnyConnect or add the following to your Dockerfile:
RUN wget http://www.cisco.com/security/pki/certs/ciscoumbrellaroot.cer
RUN openssl x509 -inform DER -in ciscoumbrellaroot.cer -out ciscoumbrellaroot.crt
RUN cp ciscoumbrellaroot.crt /usr/local/share/ca-certificates/ciscoumbrellaroot.crt
RUN update-ca-certificates
I found the answer here.
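On Alpine those four steps can also be collapsed into a single layer; a minimal sketch, assuming ca-certificates and openssl are already installed as in the Dockerfile above (the certificate URL is the one from the answer):
RUN wget http://www.cisco.com/security/pki/certs/ciscoumbrellaroot.cer \
 && openssl x509 -inform DER -in ciscoumbrellaroot.cer -out /usr/local/share/ca-certificates/ciscoumbrellaroot.crt \
 && update-ca-certificates
Combining them keeps the trust-store update in the same layer that adds the certificate and avoids a few extra image layers.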

Cloning private SSH Github repo from Cargo with Docker fails

I am a bit lost with Docker, Cargo and SSH. I have this example project: https://github.com/Jasperav/ssh-dockerfile. It is a hello-world application with a Dockerfile and a private dependency in the Cargo.toml file. You can replace the dependency with your own private dependency and just run docker build -t something ..
I want to create a Docker image of my application, which depends on a private repository. I cannot get it working, even with the new BuildKit feature (--mount=type=ssh). I have tried adding and removing things from the Dockerfile, but I keep getting errors.
This is the content of my Dockerfile, which is a combination of things I found on the internet:
FROM rust:1.65 AS builder
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
WORKDIR app
COPY . .
RUN mkdir -p /root/.ssh
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN --mount=type=ssh cargo build --release
FROM debian:buster-slim
COPY --from=builder ./target/release/docker ./target/release/docker
CMD ["./release/server"]
The error I get is:
> [builder 6/6] RUN --mount=type=ssh cargo build --release:
#12 0.400 Updating git repository `ssh://git@github.com/xxx.git`
#12 1.171 error: failed to get `x` as a dependency of package `hello v0.1.0 (/app)`
#12 1.171
#12 1.171 Caused by:
#12 1.171 failed to load source for dependency `x`
#12 1.171
#12 1.171 Caused by:
#12 1.171 Unable to update ssh://git@github.com/xx.git
#12 1.171
#12 1.171 Caused by:
#12 1.171 failed to clone into: /usr/local/cargo/git/db/xx
#12 1.171
#12 1.171 Caused by:
#12 1.171 process didn't exit successfully: `git fetch --force --update-head-ok 'ssh://git@github.com/xx.git' '+HEAD:refs/remotes/origin/HEAD'` (exit status: 128)
#12 1.171 --- stderr
#12 1.171 Warning: Permanently added the ECDSA host key for IP address '140.82.121.4' to the list of known hosts.
#12 1.171 git@github.com: Permission denied (publickey).
#12 1.171 fatal: Could not read from remote repository.
#12 1.171
#12 1.171 Please make sure you have the correct access rights
#12 1.171 and the repository exists.
------
executor failed running [/bin/sh -c cargo build --release]: exit code: 101
I am hoping that the SSH setup from the host can be used, but passing something in as a build argument is also fine. What did work was putting the access key directly in the dependency entry in the Cargo.toml file, but that leaks way too much information (and GitHub revokes the access key every time I commit).
I can run the application fine without Docker. git clone also just works.
It turns out I was missing the --ssh default argument. The build worked when I ran docker build like this:
docker build -t name --ssh default .
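For --ssh default to work, BuildKit forwards the host's running ssh-agent into the RUN --mount=type=ssh step, so the deploy key has to be loaded into the agent first. A minimal sketch of the host-side commands, assuming the key lives at ~/.ssh/id_ed25519 (the path is an example):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
DOCKER_BUILDKIT=1 docker build -t name --ssh default .
The private key itself never ends up in the image; only the known_hosts entry written by ssh-keyscan does.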

Build multi-arch docker image

I am trying to build an Ansible image for amd64 and arm64 using docker buildx, but my build always fails; it seems as if the builder can't support any architecture other than the one of the hardware I am running on. I am on Debian and I installed qemu-user-static and binfmt-support, so docker buildx ls gives the following result:
NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
hopeful_wilson * docker-container
hopeful_wilson0 unix:///var/run/docker.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
inspiring_lamarr docker-container
inspiring_lamarr0 unix:///var/run/docker.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
default docker
default default running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/arm/v7, linux/arm/v6
and here is my Dockerfile:
FROM python:alpine3.15
ADD . /tmp
WORKDIR /tmp
RUN adduser -D -s /bin/sh -h /home/ansible ansible ansible
RUN apk update && apk add openssh-client bash
RUN rm -rf /var/cache/apk/*
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
# RUN python3 -m pip install ansible
RUN sed -i "s#/bin/sh#/bin/bash#g" /etc/passwd
WORKDIR /
USER ansible
To build, I run:
docker buildx build --platform linux/amd64,linux/arm64 -t ansible-multi-arch .
And here is the result of my build:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 121.4s (24/25)
=> [internal] booting buildkit 5.8s
=> => pulling image moby/buildkit:buildx-stable-1 1.9s
=> => creating container buildx_buildkit_competent_bohr0 3.9s
=> [internal] load build definition from Dockerfile 0.8s
=> => transferring dockerfile: 419B 0.0s
=> [internal] load .dockerignore 0.9s
=> => transferring context: 2B 0.0s
=> [linux/amd64 internal] load metadata for docker.io/library/python:alpine3.15 3.0s
=> [linux/arm64 internal] load metadata for docker.io/library/python:alpine3.15 3.2s
=> [auth] library/python:pull token for registry-1.docker.io 0.0s
=> [linux/arm64 1/10] FROM docker.io/library/python:alpine3.15@sha256:74d722200c8cd876dcbd5cfb1d093c916e85d4318f051c2f3cfe5067c27cfbd5 12.9s
=> => resolve docker.io/library/python:alpine3.15@sha256:74d722200c8cd876dcbd5cfb1d093c916e85d4318f051c2f3cfe5067c27cfbd5 0.3s
=> => sha256:7025a0ca8e87cb11983c19a8a09007c64a9ebe5d2f9efe96b36658d2221721b7 231B / 231B 0.5s
=> => sha256:a1b0595ea6d26aa03a510d790d1fde94888af6ca5914982b6c9b6fd03c7a620c 3.04MB / 3.04MB 1.5s
=> => sha256:938e9f93fe23985a1f4d1d090e4ced82c31b2fabd4922b7761e42cfe7b56b9d7 12.67MB / 12.67MB 3.3s
=> => sha256:5e18021c0d0bf0a6f79a44bbbd33f12adeb2ef1358b140d8e7ef36e20c1e63b3 682.39kB / 682.39kB 0.7s
=> => sha256:47517142f6ba87eca6b7bdca1e0df160b74671c81e4b9605dad38c1862a43be3 2.72MB / 2.72MB 1.2s
=> => extracting sha256:47517142f6ba87eca6b7bdca1e0df160b74671c81e4b9605dad38c1862a43be3 0.3s
=> => extracting sha256:5e18021c0d0bf0a6f79a44bbbd33f12adeb2ef1358b140d8e7ef36e20c1e63b3 0.5s
=> => extracting sha256:938e9f93fe23985a1f4d1d090e4ced82c31b2fabd4922b7761e42cfe7b56b9d7 1.6s
=> => extracting sha256:7025a0ca8e87cb11983c19a8a09007c64a9ebe5d2f9efe96b36658d2221721b7 1.7s
=> => extracting sha256:a1b0595ea6d26aa03a510d790d1fde94888af6ca5914982b6c9b6fd03c7a620c 1.6s
=> [internal] load build context 1.4s
=> => transferring context: 49.12kB 0.0s
=> [linux/amd64 1/10] FROM docker.io/library/python:alpine3.15@sha256:74d722200c8cd876dcbd5cfb1d093c916e85d4318f051c2f3cfe5067c27cfbd5 9.0s
=> => resolve docker.io/library/python:alpine3.15@sha256:74d722200c8cd876dcbd5cfb1d093c916e85d4318f051c2f3cfe5067c27cfbd5 0.3s
=> => sha256:4211f440f0679002fb62db619c32ebb8894e25c77a19249fdf742fd4dbfb6555 3.04MB / 3.04MB 0.7s
=> => sha256:e8792c1c2edc87ab51a97c35dd511344734a625d1e67e3fd27dcfaa37ebe8eaf 231B / 231B 0.3s
=> => sha256:abed0206f3914209d0e7a549b92f3b0c85b421285ab998e63ea64d093f71289f 681.67kB / 681.67kB 1.3s
=> => sha256:9621f1afde84053b2f9b6ff34fc7f7460712247c01cbab483c5fa7132cf782ca 2.82MB / 2.82MB 1.1s
=> => sha256:0b0ae0fe5b972748ea6475feec4cd2238797fd89b8870a9f4a572f29488e5f88 12.58MB / 12.58MB 2.8s
=> => extracting sha256:9621f1afde84053b2f9b6ff34fc7f7460712247c01cbab483c5fa7132cf782ca 0.4s
=> => extracting sha256:abed0206f3914209d0e7a549b92f3b0c85b421285ab998e63ea64d093f71289f 0.7s
=> => extracting sha256:0b0ae0fe5b972748ea6475feec4cd2238797fd89b8870a9f4a572f29488e5f88 0.5s
=> => extracting sha256:e8792c1c2edc87ab51a97c35dd511344734a625d1e67e3fd27dcfaa37ebe8eaf 0.3s
=> => extracting sha256:4211f440f0679002fb62db619c32ebb8894e25c77a19249fdf742fd4dbfb6555 1.6s
=> [linux/amd64 2/10] ADD . /tmp 7.0s
=> [linux/arm64 2/10] ADD . /tmp 3.5s
=> [linux/arm64 3/10] WORKDIR /tmp 1.9s
=> [linux/amd64 3/10] WORKDIR /tmp 1.9s
=> [linux/arm64 4/10] RUN adduser -D -s /bin/sh -h /home/ansible ansible ansible 1.6s
=> [linux/amd64 4/10] RUN adduser -D -s /bin/sh -h /home/ansible ansible ansible 1.6s
=> [linux/arm64 5/10] RUN apk update && apk add openssh-client bash 4.3s
=> [linux/amd64 5/10] RUN apk update && apk add openssh-client bash 2.6s
=> [linux/amd64 6/10] RUN rm -rf /var/cache/apk/* 0.5s
=> [linux/amd64 7/10] RUN python3 -m pip install --upgrade pip 5.9s
=> [linux/arm64 6/10] RUN rm -rf /var/cache/apk/* 1.2s
=> [linux/arm64 7/10] RUN python3 -m pip install --upgrade pip 23.5s
=> [linux/amd64 8/10] RUN python3 -m pip install -r requirements.txt 43.1s
=> ERROR [linux/arm64 8/10] RUN python3 -m pip install -r requirements.txt 61.9s
=> [linux/amd64 9/10] RUN sed -i "s#/bin/sh#/bin/bash#g" /etc/passwd 1.4s
------
> [linux/arm64 8/10] RUN python3 -m pip install -r requirements.txt:
#0 3.679 Collecting ansible==6.3.0
#0 3.820 Downloading ansible-6.3.0-py3-none-any.whl (41.0 MB)
#0 8.925 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.0/41.0 MB 5.1 MB/s eta 0:00:00
#0 10.81 Collecting ansible-core==2.13.3
#0 10.84 Downloading ansible_core-2.13.3-py3-none-any.whl (2.1 MB)
#0 11.12 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 7.8 MB/s eta 0:00:00
#0 12.29 Collecting cffi==1.15.1
#0 12.32 Downloading cffi-1.15.1.tar.gz (508 kB)
#0 12.77 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 508.5/508.5 kB 1.2 MB/s eta 0:00:00
#0 13.15 Preparing metadata (setup.py): started
#0 17.85 Preparing metadata (setup.py): finished with status 'done'
#0 18.91 Collecting cryptography==37.0.4
#0 18.95 Downloading cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl (4.1 MB)
#0 19.47 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.1/4.1 MB 8.2 MB/s eta 0:00:00
#0 20.18 Collecting Jinja2==3.1.2
#0 20.22 Downloading Jinja2-3.1.2-py3-none-any.whl (133 kB)
#0 20.68 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.1/133.1 kB 291.5 kB/s eta 0:00:00
#0 21.19 Collecting MarkupSafe==2.1.1
#0 21.22 Downloading MarkupSafe-2.1.1-cp310-cp310-musllinux_1_1_aarch64.whl (30 kB)
#0 22.11 Collecting packaging==21.3
#0 22.15 Downloading packaging-21.3-py3-none-any.whl (40 kB)
#0 22.48 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.8/40.8 kB 96.3 kB/s eta 0:00:00
#0 23.14 Collecting pycparser==2.21
#0 23.17 Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
#0 23.25 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.7/118.7 kB 2.6 MB/s eta 0:00:00
#0 23.55 Collecting pyparsing==3.0.9
#0 23.58 Downloading pyparsing-3.0.9-py3-none-any.whl (98 kB)
#0 23.66 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.3/98.3 kB 2.1 MB/s eta 0:00:00
#0 23.94 Collecting PyYAML==6.0
#0 23.97 Downloading PyYAML-6.0.tar.gz (124 kB)
#0 24.25 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.0/125.0 kB 479.8 kB/s eta 0:00:00
#0 25.03 Installing build dependencies: started
#0 39.09 Installing build dependencies: finished with status 'done'
#0 39.09 Getting requirements to build wheel: started
#0 46.71 Getting requirements to build wheel: finished with status 'done'
#0 46.73 Preparing metadata (pyproject.toml): started
#0 48.48 Preparing metadata (pyproject.toml): finished with status 'done'
#0 48.66 Collecting resolvelib==0.8.1
#0 48.70 Downloading resolvelib-0.8.1-py2.py3-none-any.whl (16 kB)
#0 49.30 Building wheels for collected packages: cffi, PyYAML
#0 49.30 Building wheel for cffi (setup.py): started
#0 50.82 Building wheel for cffi (setup.py): finished with status 'error'
#0 50.86 error: subprocess-exited-with-error
#0 50.86
#0 50.86 × python setup.py bdist_wheel did not run successfully.
#0 50.86 │ exit code: 1
#0 50.86 ╰─> [47 lines of output]
#0 50.86
#0 50.86 No working compiler found, or bogus compiler options passed to
#0 50.86 the compiler from Python's standard "distutils" module. See
#0 50.86 the error messages above. Likely, the problem is not related
#0 50.86 to CFFI but generic to the setup.py of any Python package that
#0 50.86 tries to compile C code. (Hints: on OS/X 10.8, for errors about
#0 50.86 -mno-fused-madd see http://stackoverflow.com/questions/22313407/
#0 50.86 Otherwise, see https://wiki.python.org/moin/CompLangPython or
#0 50.86 the IRC channel #python on irc.libera.chat.)
#0 50.86
#0 50.86 Trying to continue anyway. If you are trying to install CFFI from
#0 50.86 a build done in a different context, you can ignore this warning.
#0 50.86
#0 50.86 /usr/local/lib/python3.10/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
#0 50.86 warnings.warn(msg, warning_class)
#0 50.86 running bdist_wheel
#0 50.86 running build
#0 50.86 running build_py
#0 50.86 creating build
#0 50.86 creating build/lib.linux-aarch64-cpython-310
#0 50.86 creating build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/setuptools_ext.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/error.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/vengine_cpy.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/pkgconfig.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/commontypes.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/backend_ctypes.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/lock.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/vengine_gen.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/api.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/verifier.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/model.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/cparser.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/cffi_opcode.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/__init__.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/recompiler.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/ffiplatform.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/_cffi_include.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/parse_c_type.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/_embedding.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 copying cffi/_cffi_errors.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 50.86 running build_ext
#0 50.86 building '_cffi_backend' extension
#0 50.86 creating build/temp.linux-aarch64-cpython-310
#0 50.86 creating build/temp.linux-aarch64-cpython-310/c
#0 50.86 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DFFI_BUILDING=1 -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-aarch64-cpython-310/c/_cffi_backend.o
#0 50.86 error: command 'gcc' failed: No such file or directory
#0 50.86 [end of output]
#0 50.86
#0 50.86 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 50.87 ERROR: Failed building wheel for cffi
#0 50.87 Running setup.py clean for cffi
#0 52.15 Building wheel for PyYAML (pyproject.toml): started
#0 54.17 Building wheel for PyYAML (pyproject.toml): finished with status 'done'
#0 54.17 Created wheel for PyYAML: filename=PyYAML-6.0-cp310-cp310-linux_aarch64.whl size=45335 sha256=6180755536e9685dbf4baacd3bfdc04e71f7c56e2467f12a2f994e33065dd5bb
#0 54.18 Stored in directory: /root/.cache/pip/wheels/1d/f3/b4/4aea0992adbed14b36ce9c3857d3707c762a4374479230685d
#0 54.19 Successfully built PyYAML
#0 54.19 Failed to build cffi
#0 55.52 Installing collected packages: resolvelib, PyYAML, pyparsing, pycparser, MarkupSafe, packaging, Jinja2, cffi, cryptography, ansible-core, ansible
#0 58.06 Running setup.py install for cffi: started
#0 59.47 Running setup.py install for cffi: finished with status 'error'
#0 59.50 error: subprocess-exited-with-error
#0 59.50
#0 59.50 × Running setup.py install for cffi did not run successfully.
#0 59.50 │ exit code: 1
#0 59.50 ╰─> [49 lines of output]
#0 59.50
#0 59.50 No working compiler found, or bogus compiler options passed to
#0 59.50 the compiler from Python's standard "distutils" module. See
#0 59.50 the error messages above. Likely, the problem is not related
#0 59.50 to CFFI but generic to the setup.py of any Python package that
#0 59.50 tries to compile C code. (Hints: on OS/X 10.8, for errors about
#0 59.50 -mno-fused-madd see http://stackoverflow.com/questions/22313407/
#0 59.50 Otherwise, see https://wiki.python.org/moin/CompLangPython or
#0 59.50 the IRC channel #python on irc.libera.chat.)
#0 59.50
#0 59.50 Trying to continue anyway. If you are trying to install CFFI from
#0 59.50 a build done in a different context, you can ignore this warning.
#0 59.50
#0 59.50 /usr/local/lib/python3.10/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
#0 59.50 warnings.warn(msg, warning_class)
#0 59.50 running install
#0 59.50 /usr/local/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
#0 59.50 warnings.warn(
#0 59.50 running build
#0 59.50 running build_py
#0 59.50 creating build
#0 59.50 creating build/lib.linux-aarch64-cpython-310
#0 59.50 creating build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/setuptools_ext.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/error.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/vengine_cpy.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/pkgconfig.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/commontypes.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/backend_ctypes.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/lock.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/vengine_gen.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/api.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/verifier.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/model.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/cparser.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/cffi_opcode.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/__init__.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/recompiler.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/ffiplatform.py -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/_cffi_include.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/parse_c_type.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/_embedding.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 copying cffi/_cffi_errors.h -> build/lib.linux-aarch64-cpython-310/cffi
#0 59.50 running build_ext
#0 59.50 building '_cffi_backend' extension
#0 59.50 creating build/temp.linux-aarch64-cpython-310
#0 59.50 creating build/temp.linux-aarch64-cpython-310/c
#0 59.50 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DFFI_BUILDING=1 -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-aarch64-cpython-310/c/_cffi_backend.o
#0 59.50 error: command 'gcc' failed: No such file or directory
#0 59.50 [end of output]
#0 59.50
#0 59.50 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 59.51 error: legacy-install-failure
#0 59.51
#0 59.51 × Encountered error while trying to install package.
#0 59.51 ╰─> cffi
#0 59.51
#0 59.51 note: This is an issue with the package mentioned above, not pip.
#0 59.51 hint: See above for output from the failure.
------
Dockerfile:10
--------------------
8 | RUN rm -rf /var/cache/apk/*
9 | RUN python3 -m pip install --upgrade pip
10 | >>> RUN python3 -m pip install -r requirements.txt
11 | # RUN python3 -m pip install ansible
12 | RUN sed -i "s#/bin/sh#/bin/bash#g" /etc/passwd
--------------------
error: failed to solve: process "/bin/sh -c python3 -m pip install -r requirements.txt" did not complete successfully: exit code: 1
Any idea?
Thank you
When docker buildx ls shows the platform list like this:
hopeful_wilson * docker-container
hopeful_wilson0 unix:///var/run/docker.sock running linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/mips64le, linux/mips64, linux/arm/v7, linux/arm/v6
the QEMU and buildx configuration is properly set up. Since the build gets past several RUN steps, that verifies it even further. The build step that did fail threw a few errors, including:
#0 50.86 No working compiler found, or bogus compiler options passed to
#0 50.86 the compiler from Python's standard "distutils" module. See
#0 50.86 the error messages above. Likely, the problem is not related
#0 50.86 to CFFI but generic to the setup.py of any Python package that
#0 50.86 tries to compile C code. (Hints: on OS/X 10.8, for errors about
#0 50.86 -mno-fused-madd see http://stackoverflow.com/questions/22313407/
#0 50.86 Otherwise, see https://wiki.python.org/moin/CompLangPython or
#0 50.86 the IRC channel #python on irc.libera.chat.)
#0 50.86
#0 50.86 Trying to continue anyway. If you are trying to install CFFI from
#0 50.86 a build done in a different context, you can ignore this warning.
...
#0 50.86 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DFFI_BUILDING=1 -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.10 -c c/_cffi_backend.c -o build/temp.linux-aarch64-cpython-310/c/_cffi_backend.o
#0 50.86 error: command 'gcc' failed: No such file or directory
#0 50.86 [end of output]
which indicates the build needed gcc and could not compile without it. You can try changing the package install line to include gcc:
RUN apk add openssh-client bash gcc
But that's no guarantee the underlying application supports arm64. If you have that other platform available, it's useful to attempt to build directly on that platform with a native docker build before attempting to emulate the system and run a docker buildx build from another architecture. From the comments, it sounds like that has failed, and you'll need to work with the application to fix that.
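If you do go the emulation route, note that cffi typically needs the C toolchain headers as well, not just gcc. A minimal sketch of a build-dependency line for this Alpine-based image (the exact package set is an assumption and may need adjusting for the rest of requirements.txt):
RUN apk add --no-cache openssh-client bash \
 && apk add --no-cache --virtual .build-deps gcc musl-dev libffi-dev openssl-dev \
 && python3 -m pip install --upgrade pip \
 && python3 -m pip install -r requirements.txt \
 && apk del .build-deps
Using a --virtual group lets the compilers be removed again in the same layer once the wheels are built, which keeps the arm64 image small.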

"Docker Image" build fails due to failure in "JupyterLab" installation

I am trying to build a Docker image for using a package named Automated Recommendation Tool. As per their Docker workflow, I installed Docker on my Ubuntu OS and then tried to build the Docker image. Following is the command that I executed:
DOCKER_BUILDKIT=1 \
docker build -f docker/Dockerfile \
--pull \
--no-cache \
-t jbei/art .
After running for a while, I got the following error:
=> ERROR [jupyter-install 1/1] RUN set -ex && poetry install --n 130.5s
It automatically continued running and later stopped with the following output:
#13 129.9 - `minimize`: This option controls whether your JS bundle is minified
#13 129.9 during the Webpack build, which helps to improve JupyterLab's overall
#13 129.9 performance. However, the minifier plugin used by Webpack is very memory
#13 129.9 intensive, so turning it off may help the build finish successfully in
#13 129.9 low-memory environments.
#13 129.9
#13 130.0 An error occurred.
#13 130.0 RuntimeError: JupyterLab failed to build
#13 130.0 See the log file for details: /tmp/jupyterlab-debug-c399mxqe.log
------
executor failed running [/bin/sh -c set -ex && poetry install --no-dev --extras "docker jupyter" --no-root --no-interaction -vv && rm -rf $ART_USER/.cache/ && jupyter lab build && jupyter labextension install @jupyter-widgets/jupyterlab-manager && find ${ART_CODE} -name __pycache__ | xargs rm -rf]: exit code: 1
Anyone experienced in building Docker images, please help me.
Following are my system specifications -
Operating System: Ubuntu 22.04 LTS
Kernel: Linux 5.15.0-37-generic
Architecture: x86-64
Hardware Vendor: HP
Hardware Model: HP Pavilion Gaming Laptop 15-ec1xxx
Docker Version -
Docker version 20.10.17, build 100c701
Update:
Following is the complete output of the jupyter lab build command:
#10 39.59 + rm -rf artuser/.cache/
#10 39.59 + jupyter lab build
#10 40.91 [LabBuildApp] JupyterLab 3.4.2
#10 40.91 [LabBuildApp] Building in /usr/local/art/.venv/share/jupyter/lab
#10 41.26 [LabBuildApp] Building jupyterlab assets (production, minimized)
#10 41.28 Build failed.
#10 122.8 Troubleshooting: If the build failed due to an out-of-memory error, you
#10 122.8 may be able to fix it by disabling the `dev_build` and/or `minimize` options.
#10 122.8
#10 122.8 If you are building via the `jupyter lab build` command, you can disable
#10 122.8 these options like so:
#10 122.8
#10 122.8 jupyter lab build --dev-build=False --minimize=False
#10 122.8
#10 122.8 You can also disable these options for all JupyterLab builds by adding these
#10 122.8 lines to a Jupyter config file named `jupyter_config.py`:
#10 122.8
#10 122.8 c.LabBuildApp.minimize = False
#10 122.8 c.LabBuildApp.dev_build = False
#10 122.8
#10 122.8 If you don't already have a `jupyter_config.py` file, you can create one by
#10 122.8 adding a blank file of that name to any of the Jupyter config directories.
#10 122.8 The config directories can be listed by running:
#10 122.8
#10 122.8 jupyter --paths
#10 122.8
#10 122.8 Explanation:
#10 122.8
#10 122.8 - `dev-build`: This option controls whether a `dev` or a more streamlined
#10 122.8 `production` build is used. This option will default to `False` (i.e., the
#10 122.8 `production` build) for most users. However, if you have any labextensions
#10 122.8 installed from local files, this option will instead default to `True`.
#10 122.8 Explicitly setting `dev-build` to `False` will ensure that the `production`
#10 122.8 build is used in all circumstances.
#10 122.8
#10 122.8 - `minimize`: This option controls whether your JS bundle is minified
#10 122.8 during the Webpack build, which helps to improve JupyterLab's overall
#10 122.8 performance. However, the minifier plugin used by Webpack is very memory
#10 122.8 intensive, so turning it off may help the build finish successfully in
#10 122.8 low-memory environments.
#10 122.8
#10 122.8 An error occurred.
#10 122.8 RuntimeError: JupyterLab failed to build
#10 122.8 See the log file for details: /tmp/jupyterlab-debug-iguri15x.log
------
executor failed running [/bin/sh -c set -ex && poetry install --no-dev --extras "docker jupyter" --no-root --no-interaction -vv && rm -rf $ART_USER/.cache/ && jupyter lab build && jupyter labextension install @jupyter-widgets/jupyterlab-manager && find ${ART_CODE} -name __pycache__ | xargs rm -rf]: exit code: 1
From these lines I can see that it is suggesting the following change to the jupyter lab build command in my Dockerfile:
jupyter lab build --dev-build=False --minimize=False
Will this help, or should I first check the log file, which I am unable to locate since the image never gets built?
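If the flags are worth trying, the change goes into the RUN line of the project's Dockerfile rather than on the host. A minimal sketch that keeps the rest of the command exactly as shown in the error output above and only adds the two flags:
RUN set -ex \
 && poetry install --no-dev --extras "docker jupyter" --no-root --no-interaction -vv \
 && rm -rf $ART_USER/.cache/ \
 && jupyter lab build --dev-build=False --minimize=False \
 && jupyter labextension install @jupyter-widgets/jupyterlab-manager \
 && find ${ART_CODE} -name __pycache__ | xargs rm -rf
Alternatively, giving the Docker daemon more memory, or raising Node's heap with NODE_OPTIONS=--max-old-space-size=4096 before jupyter lab build, can let the minimized build finish.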

docker build : getting-started tutorial => Certificate error

I'm having trouble with the getting-started tutorial of docs.docker.com:
https://docs.docker.com/get-started/02_our_app/
When I execute the following command:
docker build -t getting-started .
I get the following errors:
> [2/5] RUN apk add --no-cache python2 g++ make:
#5 0.412 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
#5 0.551 139899692677960:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#5 0.552 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: Permission denied
#5 0.552 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
#5 0.603 139899692677960:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#5 0.604 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: Permission denied
#5 0.604 ERROR: unable to select packages:
#5 0.605 g++ (no such package):
#5 0.605 required by: world[g++]
#5 0.605 make (no such package):
#5 0.605 required by: world[make]
#5 0.605 python2 (no such package):
#5 0.605 required by: world[python2]
------
executor failed running [/bin/sh -c apk add --no-cache python2 g++ make]: exit code: 3
I'm on Windows 10 V1909 and I downloaded WSL 2 as specified in the tutorial.
EDIT:
As Hans Kilian answered, it was a VPN problem...
Be careful if you are behind a proxy.
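If you cannot simply disconnect from the VPN or proxy, another option is to make apk trust the intercepting root CA inside the build, similar to the Cisco Umbrella fix above. A minimal sketch for this Alpine-based image, assuming the CA has been exported in PEM format as proxy-root.crt in the build context and that the base image ships its certificate bundle at /etc/ssl/certs/ca-certificates.crt (true for recent Alpine releases):
COPY proxy-root.crt /tmp/proxy-root.crt
RUN cat /tmp/proxy-root.crt >> /etc/ssl/certs/ca-certificates.crt \
 && apk add --no-cache python2 g++ make
Appending the CA before the apk add step lets the APKINDEX fetches verify the proxy-signed certificates again.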
