I am having a very weird issue when building an armv7 Docker image using docker buildx, but not when building it natively on armv7 hardware.
Here is a very simple Dockerfile:
FROM ubuntu:20.04
ARG ARCH
RUN apt-get update && \
apt-get install -y curl wget
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
\
url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
wget -O go.tgz "$url"; \
tar -C /usr/local -xzf go.tgz; \
rm go.tgz; \
export PATH="/usr/local/go/bin:$PATH"; \
go version
I can build the image for arm64 both on macOS and on a Raspberry Pi just fine. No such luck when building it for armv7, though.
I am building the image on macOS using buildx as follows:
docker buildx build --platform linux/arm/v7 -t test:armv7 --build-arg ARCH=armv6l .
This fails with a certificate error when connecting to golang.org:
#7 0.378 Resolving golang.org (golang.org)... 142.250.185.113, 2a00:1450:4001:80f::2011
#7 0.448 Connecting to golang.org (golang.org)|142.250.185.113|:443... connected.
#7 0.682 ERROR: cannot verify golang.org's certificate, issued by 'CN=GTS CA 1O1,O=Google Trust Services,C=US':
#7 0.682 Unable to locally verify the issuer's authority.
#7 0.688 To connect to golang.org insecurely, use `--no-check-certificate'.
However, if I build the exact same image natively on armv7 (Raspberry Pi 2B), it works just fine:
docker build -t test:armv7 --build-arg ARCH=armv6l .
Needless to say, I am very confused as to why one works and the other doesn't.
Add the --no-check-certificate argument to the wget command so that it does not check the SSL certificate:
FROM ubuntu:20.04
ARG ARCH
RUN apt-get update && \
apt-get install -y curl wget
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
\
url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
wget --no-check-certificate -O go.tgz "$url"; \
tar -C /usr/local -xzf go.tgz; \
rm go.tgz; \
export PATH="/usr/local/go/bin:$PATH"; \
go version
The best fix, though, is to install the ca-certificates package so wget can actually verify the certificate instead of skipping the check: change the apt-get step to apt-get install -y curl wget ca-certificates.
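For example, a minimal sketch of that fix; everything else from the original Dockerfile is unchanged:

FROM ubuntu:20.04
ARG ARCH
# ca-certificates provides the root CAs wget needs to verify golang.org
RUN apt-get update && \
    apt-get install -y curl wget ca-certificates
# Install Go
ENV GOLANG_VERSION 1.15.8
RUN set -eux; \
    url="https://golang.org/dl/go${GOLANG_VERSION}.linux-${ARCH}.tar.gz"; \
    wget -O go.tgz "$url"; \
    tar -C /usr/local -xzf go.tgz; \
    rm go.tgz; \
    export PATH="/usr/local/go/bin:$PATH"; \
    go version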
Related
I've downloaded Minikube and I'm using it in my application.
I've prepared my local docker command to use the one provided by minikube with eval $(minikube docker-env)
Finally, I'm using docker-compose with commands like docker-compose build myimage and I'm getting the following error:
failed to get status: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 404"
Any idea what could be the problem? Apart from this, I find that docker-compose and docker behave as I expect.
The relevant section of the docker-compose.yml is
myservice:
build:
context: .
dockerfile: Dockerfile
image: myimage
And for the Dockerfile:
FROM python:3.8-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get -y upgrade && \
apt-get -y install \
build-essential \
gettext-base \
libffi-dev \
libldap2-dev \
libmagic1 \
libsasl2-dev \
libssl-dev \
libxml2-dev \
libxmlsec1-dev \
libxslt1-dev \
libyaml-dev \
pkg-config \
&& \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip pipenv && rm -rf ~/.cache/pip
ENV PYTHONPATH=/opt/app/src:/opt/app/src/vendor
RUN mkdir -p /opt/app
COPY build/Pipfile build/Pipfile.lock /tmp/
WORKDIR /tmp
RUN pipenv install --system && rm -rf ~/.cache/pip{,env,-tools}
Also, I want to stress that this works perfectly when I use it locally. It's only when I try to use it on minikube that it starts failing.
According to the information I found on GitHub, there are two possible causes for this behavior: one user getting the same error as you solved it by turning off the 'Experimental Features' option, and another user had to downgrade their Docker version and rebuild the deployment.
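If you are not on Docker Desktop (where this is a checkbox in Preferences), a sketch of the daemon-side equivalent, assuming your daemon reads /etc/docker/daemon.json:

{
  "experimental": false
}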
I experienced the same issue building within GitLab CI on a Debian 11 image, with Docker 20.10.5 and Compose 2.5.1.
I was able to work around it by setting DOCKER_BUILDKIT=0 in the build environment.
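For example, either inline for a one-off build, or as a CI variable (the .gitlab-ci.yml fragment is a sketch assuming a standard GitLab CI job):

DOCKER_BUILDKIT=0 docker-compose build myimage

# .gitlab-ci.yml
variables:
  DOCKER_BUILDKIT: "0"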
When I build my Dockerfile image on my MacBook M1, I receive errors regarding syslinux specifically, and if I comment that out I continue to receive errors such as this:
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/main: UNTRUSTED signature
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.13/main: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.13/community: UNTRUSTED signature
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.13/community: No such file or directory
So I know the issue revolves around the repositories I use; here is what the ENTRYPOINT in my Dockerfile says:
ENTRYPOINT /src/aports/scripts/mkimage.sh \
--tag v3.13 \
--outdir /build \
--arch x86_64 \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
--extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
--profile iot
I would have believed this would work on my M1, but it doesn't! I used another MacBook and that one builds it, so why not the M1? I would greatly appreciate any help with this.
EDIT 2: Adding full Dockerfile:
# This image contains the build environment for edge appliance install ISOs
FROM alpine:3.13
# Define metadata
LABEL maintainer="this_dude@dude.net"
# Configure user
RUN addgroup root this_build
# Initialize update and upgrade on Alpine AMI
RUN apk -U upgrade
# Install dependencies
RUN apk add --no-cache \
alpine-conf \
alpine-sdk \
apk-tools \
dosfstools \
grub-efi \
mtools \
squashfs-tools \
syslinux \
xorriso
WORKDIR /src
# Clone alpine ports repository containing the iso builder
RUN git clone --depth=1 --branch v3.13.2 git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
# Include edge appliance image profile
RUN ln -sf /build/mkimg.run.sh /src/aports/scripts/mkimg.run.sh
WORKDIR /build
# Run ISO build
ENTRYPOINT /src/aports/scripts/mkimage.sh \
--tag v3.13 \
--outdir /build \
--arch x86_64 \
--repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
--extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
--profile iot
As you can see at https://pkgs.alpinelinux.org/packages?name=syslinux, the syslinux bootloader package has no support for aarch64 (M1 processors). I would suggest using another bootloader with both x86 and ARM support, for example https://pkgs.alpinelinux.org/packages?name=u-boot&branch=edge.
And don't forget to change the --arch x86_64 argument in your ENTRYPOINT to --arch aarch64 if you want it to run without errors on your M1 processor. Or just remove it to use the default_arch from the sh script.
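A sketch of the adjusted ENTRYPOINT (swapping u-boot in for syslinux in the apk add step is not shown; which u-boot subpackage the iot profile actually needs is an assumption to verify against the package index linked above):

# --arch changed from x86_64 to aarch64 for the M1; alternatively drop the
# flag entirely and let mkimage.sh fall back to its default_arch
ENTRYPOINT /src/aports/scripts/mkimage.sh \
    --tag v3.13 \
    --outdir /build \
    --arch aarch64 \
    --repository http://dl-cdn.alpinelinux.org/alpine/v3.13/main \
    --extra-repository http://dl-cdn.alpinelinux.org/alpine/v3.13/community \
    --profile iot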
I would like to install the AWS CLI in the image below, but I received the error shown underneath. I tried apk and apt, but neither worked. Can you please help me figure out how to update my Dockerfile?
I do not want to change my base image; I need to use maven:3.6.3-openjdk-14.
sh: apt-get: command not found
FROM maven:3.6.3-openjdk-14
RUN apt-get update \
&& apt-get install -y vim jq unzip curl \
&& apt-get upgrade -y
#install aws 2
RUN curl --silent --show-error --fail "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -rf awscliv2.zip
The Docker image maven:3.6.3-openjdk-14 is based on Oracle Linux, which uses rpm to manage packages, so apt-get is not available. You can verify this by inspecting the OS release inside the image:
docker run -it maven:3.6.3-openjdk-14 cat /etc/os-release
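A sketch of a working Dockerfile, assuming the base image ships yum (Oracle Linux 7 slim images do; if your tag is based on Oracle Linux 8 slim, substitute microdnf for yum):

FROM maven:3.6.3-openjdk-14
# yum, not apt-get: the base image is Oracle Linux, not Debian/Ubuntu
RUN yum install -y curl unzip
# Install AWS CLI v2
RUN curl --silent --show-error --fail "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip && \
    unzip awscliv2.zip && \
    ./aws/install && \
    rm -rf awscliv2.zip aws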
I'm trying to build a Docker image including a very particular configuration of OpenCV with CUDA and GPU support.
The build succeeds, and if I run make install from the same stage that built the image, it works with no problems.
The problem happens when I try to use a multi-stage build, to avoid keeping all the dependencies needed to build OpenCV. Before you continue reading: what follows might actually be an XY problem, so if you have a better solution for copying OpenCV build artifacts (including the Python bindings!) in a Docker multi-stage build, that is my actual intent.
Now for my attempted solution and the struggle I have:
I run COPY --from=requirements /opencv /opencv and it works: it apparently copies everything to the right path (I checked the filesystem). But when I then run make install from the build folder, I get this CMake error:
CMake Error: The source directory "" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:2724: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Again, the same command, from the same folder, but without multistage build, works.
Here is my Dockerfile:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
# Install dependencies
RUN echo "deb http://es.archive.ubuntu.com/ubuntu eoan main universe" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake unzip pkg-config libjpeg-dev libpng-dev libtiff-dev libavcodec-dev \
libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev \
gfortran python3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libxvidcore-dev x264 \
libx264-dev libfaac-dev libmp3lame-dev libtheora-dev libfaac-dev libmp3lame-dev libvorbis-dev \
libjpeg-dev libpng-dev libtiff-dev git python3-pip libtbb-dev libprotobuf-dev protobuf-compiler \
libgoogle-glog-dev libgflags-dev libgphoto2-dev libeigen3-dev libhdf5-dev wget libtbb-dev gcc-8 g++-8 llvm \
python3-venv libgirepository1.0-dev
# Install my project requirements
WORKDIR /venv
RUN python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
# Build OpenCV
WORKDIR /opencv
RUN wget https://github.com/opencv/opencv/archive/4.4.0.zip && mv 4.4.0.zip opencv.zip && unzip opencv.zip && rm opencv.zip
RUN wget https://github.com/opencv/opencv_contrib/archive/4.4.0.zip && mv 4.4.0.zip opencv_contrib.zip && unzip opencv_contrib.zip && rm opencv_contrib.zip
WORKDIR /opencv/opencv-4.4.0/build
ENV SITE_PACKAGES /venv/lib/python3.7/site-packages
ENV EXTRA_MODULES /opencv/opencv_contrib-4.4.0/modules
ENV CUDA_ARCH 7.5
ADD docker/build_opencv.sh .
RUN ./build_opencv.sh
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake python3-venv
# Install OpenCV
COPY --from=requirements /opencv /opencv
WORKDIR /opencv/opencv-4.4.0/build
RUN make install && ldconfig
# build fails here; the rest is specific to my project so I've omitted it
The build_opencv.sh script has these options:
#!/bin/bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=/usr/bin/gcc-8 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=$SITE_PACKAGES \
-D OPENCV_EXTRA_MODULES_PATH=$EXTRA_MODULES \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=$CUDA_ARCH \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D WITH_GTK_2_X=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j16
You need at least numpy in your requirements.txt file.
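For reference, a minimal requirements.txt for reproducing the build (numpy is needed to generate the OpenCV Python bindings):

numpy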
In order to reproduce the issue, a minimal setup would have this structure:
- docker
- Dockerfile
- build_opencv.sh
- requirements.txt
Build from the root of the build context using:
docker build -t opencvmultistage:latest -f docker/Dockerfile .
Am I doing something wrong? Maybe CMake has some weird cache that I'm not copying to the new image, which makes the build fail?
For the sake of clarity: if I add make install to the build_opencv.sh script it works, but then OpenCV is installed in the build stage and not the runtime stage, which is not what I intend. make install runs in the same directory, and the same files should be present, so I don't really know what's going on.
It is simpler to run cmake, make, and make install in the same stage and then copy the installed folder. This way the final Docker image does not contain any build tools like cmake or build-essential.
We will use a custom CMAKE_INSTALL_PREFIX so that the OpenCV binaries are installed into a single directory, which we can copy straight to the next stage. Using a custom prefix avoids having to copy the CUDA installation or development libraries that are no longer required. Then we run ldconfig on that directory to link the libraries as usual.
Modify the build script to use a custom CMAKE_INSTALL_PREFIX:
mkdir /prefix
cmake -D CMAKE_BUILD_TYPE=RELEASE \
# all compiler flags...
-D CMAKE_INSTALL_PREFIX=/prefix
Modifying the Dockerfile
to run make install in stage 1
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
...
ADD build_opencv.sh .
RUN ./build_opencv.sh && make install
copy the installation in stage 2
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential python3-venv
# Install OpenCV
COPY --from=requirements /prefix /prefix
COPY --from=requirements /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN ldconfig /prefix
I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
build-essential \
ca-certificates \
gcc \
git \
libpq-dev \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts
RUN pip install git+ssh://git@bitbucket.org/repo.git
I am building the Docker image from this Dockerfile using the following command:
docker build -t myimage:v1 --build-arg SSH_PRIVATE_KEY="ssh-rsa jkdfjgklfsgnkljgxdfeheflkfkl/hkskkdhgtgshshsh/... " .
However, it is not building my image. I get the following error:
"docker build" requires exactly 1 argument.
What could be the issue? How to correctly pass the SSH_PRIVATE_KEY while building the image?
Assign your private key to a shell variable and use that in the command; reading the key from a file into the variable is the safest approach. The special characters inside the key might be breaking the command, for example if the key contains a quotation mark. Note the quotes around $PKEY below: without them, the spaces and newlines in the key would split it into multiple arguments, which is exactly what produces the "requires exactly 1 argument" error. Ex:
PKEY=$(<key.txt)
docker build -t myimage:v1 --build-arg SSH_PRIVATE_KEY="$PKEY" .