Docker command fails during build, but succeeds when executed within a running container

The command:
docker build -t nginx-ubuntu .
with the Dockerfile below:
FROM ubuntu:12.10
RUN apt-get update
RUN apt-get -y install libpcre3 libssl-dev
RUN apt-get -y install libpcre3-dev
RUN apt-get -y install wget zip gcc
RUN wget http://nginx.org/download/nginx-1.4.1.tar.gz
RUN gunzip nginx-1.4.1.tar.gz
RUN tar -xf nginx-1.4.1.tar
RUN wget --no-check-certificate https://github.com/max-l/nginx_accept_language_module/archive/master.zip
RUN unzip master
RUN cd nginx-1.4.1
RUN ./configure --add-module=../nginx_accept_language_module-master --with-http_ssl_module --with-pcre=/lib/x86_64-linux-gnu --with-openssl=/usr/lib/x86_64-linux-gnu
It fails at the last line (./configure ...).
If I remove the last line, run a bash shell in the container, and execute the last line manually, it works.
I would expect that any command that runs successfully within a container should also work when appended to the Dockerfile (prefixed by RUN).
Am I missing something?

The working directory (pwd) is not persistent across RUN commands: each RUN executes in a fresh shell in a new layer, so a cd in one RUN has no effect on the next. You need to cd and configure within the same RUN.
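A minimal sketch of the behavior (works with any base image):
RUN cd /tmp    # takes effect only within this RUN's shell
RUN pwd        # prints /, the default working directory, not /tmp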
This Dockerfile works fine:
FROM ubuntu:12.10
RUN apt-get update
RUN apt-get -y install libpcre3 libssl-dev
RUN apt-get -y install libpcre3-dev
RUN apt-get -y install wget zip gcc
RUN wget http://nginx.org/download/nginx-1.4.1.tar.gz
RUN gunzip nginx-1.4.1.tar.gz
RUN tar -xf nginx-1.4.1.tar
RUN wget --no-check-certificate https://github.com/max-l/nginx_accept_language_module/archive/master.zip
RUN unzip master
RUN cd nginx-1.4.1 && ./configure --add-module=../nginx_accept_language_module-master --with-http_ssl_module --with-pcre=/lib/x86_64-linux-gnu --with-openssl=/usr/lib/x86_64-linux-gnu

As an alternative to @creak's answer, you can change the default working directory in a Dockerfile with the WORKDIR instruction:
FROM ubuntu:12.10
# Run update & install together so that the Docker cache doesn't
# contain an out-of-date APT cache (leading to 404s when installing
# packages)
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install libpcre3 libssl-dev libpcre3-dev wget zip gcc
ADD http://nginx.org/download/nginx-1.4.1.tar.gz nginx-1.4.1.tar.gz
RUN tar -zxf nginx-1.4.1.tar.gz
RUN wget --no-check-certificate https://github.com/max-l/nginx_accept_language_module/archive/master.zip
RUN unzip master
WORKDIR nginx-1.4.1
RUN ./configure --add-module=../nginx_accept_language_module-master --with-http_ssl_module --with-pcre=/lib/x86_64-linux-gnu --with-openssl=/usr/lib/x86_64-linux-gnu
This also affects the default directory when you use docker run (overridden by the -w switch).
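For example, with the image built above (the /tmp value here is arbitrary):
docker run -w /tmp nginx-ubuntu pwd    # prints /tmp instead of the image's WORKDIR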

When I called wget or tar with RUN, I needed to specify a path; it looks like ADD is the correct approach if you want to use the WORKDIR as its path instead. Either of the approaches below resolved my issue.
RUN
RUN wget http://nginx.org/download/nginx-1.4.1.tar.gz -P ~/directory
RUN tar -zxf ~/directory/nginx-1.4.1.tar.gz -C ~/directory
or
ADD
# note: WORKDIR does not expand ~, so use an absolute path here
WORKDIR /root/directory
ADD http://nginx.org/download/nginx-1.4.1.tar.gz nginx-1.4.1.tar.gz
RUN tar -zxf nginx-1.4.1.tar.gz
The second approach prevented me from needing to install wget in the container.
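One caveat worth noting (general ADD behavior, not specific to this thread): ADD auto-extracts local tar archives from the build context, but files fetched from a URL are copied as-is, which is why the explicit tar step is still needed above:
ADD nginx-1.4.1.tar.gz /src/    # local archive: extracted automatically into /src/
ADD http://nginx.org/download/nginx-1.4.1.tar.gz /src/    # URL: copied as-is, not extracted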

Another way of doing this is to use the \ operator to continue the command on a new line; note that \ alone only splits the line, so you still need && to chain the commands:
Example
RUN cd /Desktop && \
    cd Work && \
    pwd

Related

Can't compile Rust project in conda environment in Docker

I'm trying to build the sightglass benchmarking suite with the following Dockerfile:
FROM ubuntu:22.04
RUN echo 'APT::Install-Suggests "0";' >> /etc/apt/apt.conf.d/00-docker
RUN echo 'APT::Install-Recommends "0";' >> /etc/apt/apt.conf.d/00-docker
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update \
&& apt-get install -y python3 \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src
ADD rust-benchmark rust-benchmark
WORKDIR /usr/src/rust-benchmark
RUN apt update --yes
RUN apt install clang lldb lld wget curl git xz-utils bzip2 --yes
RUN apt-get install --reinstall ca-certificates --yes
RUN apt-get install libgl1-mesa-glx libegl1-mesa libxrandr2 libxrandr2 libxss1 libxcursor1 libxcomposite1 libasound2 libxi6 libxtst6 -y
RUN mkdir /usr/local/share/ca-certificates/cacert.org
RUN wget -P /usr/local/share/ca-certificates/cacert.org http://www.cacert.org/certs/root.crt http://www.cacert.org/certs/class3.crt
RUN update-ca-certificates
RUN git config --global http.sslCAinfo /etc/ssl/certs/ca-certificates.crt
RUN wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh --no-check-certificate
RUN cd / && find . -name cargo
RUN chmod +x Anaconda3-2022.10-Linux-x86_64.sh
RUN yes yes | ./Anaconda3-2022.10-Linux-x86_64.sh
RUN rm Anaconda3-2022.10-Linux-x86_64.sh
RUN echo "export PATH=./yes/bin:$PATH" >> ~/.bashrc
ENV CONDA ./yes/bin/
ENV PATH="${CONDA}:${PATH}"
RUN ln -s ./yes/bin/conda /usr/local/bin/conda
RUN eval $(conda shell.bash hook)
RUN conda init bash
RUN conda update --all
RUN cd / && find . -name cargo
RUN conda create -c conda-forge -n rustenv rust
RUN activate rustenv
SHELL ["./yes/bin/conda", "run", "-n", "rustenv", "/bin/bash", "-c"]
RUN rustc --version
ENV GIT_SSL_NO_VERIFY=1
RUN git clone https://github.com/emscripten-core/emsdk.git
RUN cd emsdk && git pull
RUN chmod +x ./emsdk/emsdk
RUN ./emsdk/emsdk install latest
RUN ./emsdk/emsdk activate latest
RUN chmod +x ./emsdk/emsdk_env.sh
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN cd emsdk && source ./emsdk_env.sh
RUN ./emsdk/emsdk_env.sh
ENV EMSDK ./emsdk
ENV EMSCRIPTEN=${EMSDK}/emscripten/sdk
ENV EM_DATA ${EMSDK}/.data
ENV EM_CONFIG ${EMSDK}/.emscripten
ENV EM_CACHE ${EM_DATA}/cache
ENV EM_PORTS ${EM_DATA}/ports
ENV PATH="${EMSDK}:${EMSDK}/emscripten/sdk:${EMSDK}/llvm/clang/bin:${EMSDK}/node/current/bin:${EMSDK}/binaryen/bin:${PATH}"
RUN curl https://sh.rustup.rs -ksSf | sh -s -- -y
RUN chmod +x $HOME/.cargo/env
RUN $HOME/.cargo/env
ENV RUST ~/.cargo/bin
ENV PATH="${RUST}:${PATH}"
RUN rustup default nightly
RUN rustup target add wasm32-wasi --toolchain nightly
RUN ./yes/envs/rustenv/bin/cargo build --release --target wasm32-wasi
RUN cp target/wasm32-wasi/release/bls-381-wasm-benchmark.wasm /benchmark.wasm
The build process always aborts on the compile step with the following error:
error[E0463]: can't find crate for `core`
|
= note: the `wasm32-wasi` target may not be installed
= help: consider downloading the target with `rustup target add wasm32-wasi`
error[E0463]: can't find crate for `compiler_builtins`
My full setup can be found here: https://github.com/achimcc/arkworks-wasmtime-benchmarks/tree/main/benchmarks/bls12-381
It seems you are compiling something whose target is wasm32-wasi.
Rust can compile source code for different "targets", but only a few of them are enabled by default.
To install the wasm32-wasi target, you can run this command:
rustup target add wasm32-wasi
For any other questions about compiling or environments, feel free to comment here.
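If the intent is to build with a rustup-managed toolchain rather than conda's rust package, a minimal sketch of the relevant Dockerfile steps might be (note the target must be added for the same toolchain that cargo actually invokes):
RUN rustup target add wasm32-wasi --toolchain nightly
RUN cargo +nightly build --release --target wasm32-wasi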

OpenCV Docker multistage build - cannot install prebuilt source

I'm trying to build a Docker image including a very particular configuration of OpenCV with CUDA and GPU support.
The build succeeds, and if I run make install from the same context that built the image, it works with no problems.
The problem happens when I try to use a multi-stage build, to avoid keeping all the dependencies needed to build OpenCV. Before you continue reading, what follows might actually be an XY problem; if you have a better solution for copying OpenCV build artifacts (including the Python bindings!) in a Docker multi-stage build, that is my actual intent.
Now for my attempted solution and the struggle I have:
I run COPY --from=requirements /opencv /opencv and it apparently copies everything to the right path (I checked the filesystem). But when I run make install from the build folder, I get this CMake error:
CMake Error: The source directory "" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:2724: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Again, the same command, from the same folder, but without multistage build, works.
Here is my Dockerfile:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
# Install dependencies
RUN echo "deb http://es.archive.ubuntu.com/ubuntu eoan main universe" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake unzip pkg-config libjpeg-dev libpng-dev libtiff-dev libavcodec-dev \
libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev \
gfortran python3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libxvidcore-dev x264 \
libx264-dev libfaac-dev libmp3lame-dev libtheora-dev libfaac-dev libmp3lame-dev libvorbis-dev \
libjpeg-dev libpng-dev libtiff-dev git python3-pip libtbb-dev libprotobuf-dev protobuf-compiler \
libgoogle-glog-dev libgflags-dev libgphoto2-dev libeigen3-dev libhdf5-dev wget libtbb-dev gcc-8 g++-8 llvm \
python3-venv libgirepository1.0-dev
# Install my project requirements
WORKDIR /venv
RUN python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
# Build OpenCV
WORKDIR /opencv
RUN wget https://github.com/opencv/opencv/archive/4.4.0.zip && mv 4.4.0.zip opencv.zip && unzip opencv.zip && rm opencv.zip
RUN wget https://github.com/opencv/opencv_contrib/archive/4.4.0.zip && mv 4.4.0.zip opencv_contrib.zip && unzip opencv_contrib.zip && rm opencv_contrib.zip
WORKDIR /opencv/opencv-4.4.0/build
ENV SITE_PACKAGES /venv/lib/python3.7/site-packages
ENV EXTRA_MODULES /opencv/opencv_contrib-4.4.0/modules
ENV CUDA_ARCH 7.5
ADD docker/build_opencv.sh .
RUN ./build_opencv.sh
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake python3-venv
# Install OpenCV
COPY --from=requirements /opencv /opencv
WORKDIR /opencv/opencv-4.4.0/build
RUN make install && ldconfig
# build fails here; the rest is specific to my project so I've omitted it
The build_opencv.sh script has these options:
#!/bin/bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=/usr/bin/gcc-8 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=$SITE_PACKAGES \
-D OPENCV_EXTRA_MODULES_PATH=$EXTRA_MODULES \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=$CUDA_ARCH \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D WITH_GTK_2_X=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j16
You need at least numpy in your requirements.txt file.
In order to reproduce the issue, a minimal setup would have this structure:
- docker
- Dockerfile
- build_opencv.sh
- requirements.txt
Build using from the root of the build context:
docker build -t opencvmultistage:latest -f docker/Dockerfile .
Am I doing something wrong? Maybe CMake has some weird cache that I'm not copying to the new image, which makes the build fail?
For the sake of clarity: if I add make install to the build_opencv.sh script it works, but then I have OpenCV installed in the build stage and not the runtime stage, which is not what I intend. make install runs in the same directory, and the same files should be present, so I don't really know what's going on.
It is simpler to run cmake, make, and make install in the same stage and then copy the installed folders. This also keeps build tools like cmake out of the final Docker image.
We will use a custom CMAKE_INSTALL_PREFIX so that the OpenCV binaries are installed to a single directory that we can copy straight to the next stage. Using a custom prefix avoids having to copy the CUDA installation or development libraries that are no longer required. Then we run ldconfig on that directory to link the libraries as usual.
Modify the build script to use a custom CMAKE_INSTALL_PREFIX (all the other flags stay exactly as before):
mkdir /prefix
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/prefix \
      ...   # all the other flags as before
Modifying the Dockerfile
to run make install in stage 1
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
...
ADD build_opencv.sh .
RUN ./build_opencv.sh && make install
copy the installation in stage 2
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential python3-venv
# Install OpenCV
COPY --from=requirements /prefix /prefix
COPY --from=requirements /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN ldconfig /prefix
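As a quick sanity check (a hypothetical extra step, assuming the Python bindings were installed into the venv via OPENCV_PYTHON3_INSTALL_PATH), you can verify the import in the runtime stage:
RUN python3 -c "import cv2; print(cv2.__version__)"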

Set of artifacts that needs to be copied for Docker multistage builds with yum

I'm trying to build a multi-stage Docker image based on CentOS:
FROM centos as python-base
RUN yum install -y wget \
tar \
make \
gcc \
gcc-c++ \
zlib \
zlib-devel \
libffi-devel \
openssl-devel \
&& yum clean all
WORKDIR /usr/src/
RUN wget https://www.python.org/ftp/python/3.7.0/Python-3.7.0.tgz
RUN tar xzf Python-3.7.0.tgz
WORKDIR /usr/src/Python-3.7.0
RUN ./configure --enable-optimizations
RUN make altinstall
RUN python3.7 -V
#=====================================================================================
FROM centos:cs as python37
COPY --from=python-base /usr/local/lib/python3.7 /usr/local/lib/python3.7
COPY --from=python-base /usr/local/bin/pip3.7 /usr/local/bin/pip3.7
COPY --from=python-base /usr/local/bin/python3.7 /usr/local/bin/python3.7
RUN ln -s /usr/local/bin/pip3.7 /usr/local/bin/pip
RUN ln -s /usr/local/bin/python3.7 /usr/local/bin/python
As shown above, I've built the python37 image from the python-base stage, copying the required artifacts from python-base into the python37 stage.
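A quick smoke test (a hypothetical extra step, not from the original) helps confirm the copied interpreter actually runs in the slim stage:
RUN python --version && pip --version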
FROM centos as httpd-base
RUN yum -y groupinstall "Development tools" \
    httpd-2.4.6-88.el7.centos.x86_64 \
    && yum clean all
So my question is: what is the set of artifacts that needs to be copied from the httpd-base stage to build an image that has httpd but not all the development tools that are required only during installation?
Any best practices in this regard are also appreciated.
Thanks in advance.
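A minimal sketch of one common approach, assuming the stock httpd package is acceptable: rather than copying yum-installed artifacts between stages, install only the runtime package in the final stage and keep "Development tools" confined to the build stage:
FROM centos as httpd-runtime
RUN yum -y install httpd && yum clean all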

Install sdkman in docker image

I'm getting an error while installing SDKMAN! in an Ubuntu 16.04 Docker image.
FROM ubuntu:16.04
RUN apt-get update
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -qq -y install curl
RUN curl -s https://get.sdkman.io | bash
RUN chmod a+x "$HOME/.sdkman/bin/sdkman-init.sh"
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
Make sure you have curl, wget, unzip, and zip; with them I was able to install SDKMAN successfully. The following is my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -qq -y install curl wget unzip zip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
TL;DR
Install unzip & zip, which means change
RUN apt-get -qq -y install curl
to
RUN apt-get -qq -y install curl unzip zip
or better
RUN apt-get -qq -y install \
curl \
unzip \
zip
Explanation
When you try to build the Dockerfile, you will get
.....
Step 5/6 : RUN curl -s https://get.sdkman.io | bash
---> Running in 1ce678a59561
--- SDKMAN LOGO ---
Now attempting installation...
Looking for a previous installation of SDKMAN...
Looking for unzip...
Not found.
======================================================================================================
Please install unzip on your system using your favourite package manager.
Restart after installing unzip.
======================================================================================================
Removing intermediate container 1ce678a59561
---> 22211eafd50c
Step 6/6 : RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
---> Running in 1c5cb7d79ef0
/bin/sh: /root/.sdkman/bin/sdkman-init.sh: No such file or directory
The command '/bin/sh -c source "$HOME/.sdkman/bin/sdkman-init.sh"' returned a non-zero code: 1
What you need to do is written right there, in this part:
======================================================================================================
Please install unzip on your system using your favourite package manager.
Restart after installing unzip.
======================================================================================================
When you install unzip, you get the same error for zip. After installing both, everything works fine.
So, read your logs/command output. :-)
P.S. It would be better if curl -s https://get.sdkman.io | bash exited with a non-zero code; as it is, the build only fails at the next command. But that is not a thing you can fix. ;)
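A hedged workaround for that limitation: make the build fail explicitly by checking that the installer actually produced the init script, e.g.:
RUN curl -s "https://get.sdkman.io" | bash \
    && test -s "$HOME/.sdkman/bin/sdkman-init.sh"    # fail the build if the install silently failed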
It looks like the sdkman install failed.
When I ran your code above it complained about missing the unzip and zip packages.
After satisfying the dependencies, you'll also need to mark the init script as executable with:
chmod a+x "$HOME/.sdkman/bin/sdkman-init.sh"
So your Dockerfile should look something like:
FROM ubuntu:16.04
RUN apt-get update
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -q -y install curl zip unzip
RUN curl -s https://get.sdkman.io | bash
RUN chmod a+x "$HOME/.sdkman/bin/sdkman-init.sh"
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
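One caveat with the final line: environment changes made by source do not persist into later RUN steps, so any sdk commands must run in the same RUN that sources the init script, e.g. (sdk version is just an example command):
RUN source "$HOME/.sdkman/bin/sdkman-init.sh" && sdk version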
P.S: Beaten to the punch!
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get -qq -y install \
curl \
unzip \
zip
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -qq -y install curl
RUN curl -s https://get.sdkman.io | bash
RUN chmod a+x "$HOME/.sdkman/bin/sdkman-init.sh"
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
You can try this image:
docker pull kubile/ubuntu-sdkman:23.04
This Dockerfile seems to work with versions current as of February 2023:
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl zip unzip
RUN curl -s "https://get.sdkman.io" | bash
# this SHELL command is needed to allow using source
SHELL ["/bin/bash", "-c"]
# it seems you need to put 'sdk install ...' lines in the same RUN command as 'source ...'
RUN source "/root/.sdkman/bin/sdkman-init.sh" \
&& sdk install java 19.0.2-tem \
&& sdk install sbt 1.8.2 \
&& sdk install scala 2.13.10
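To verify the resulting image (the tag name is just an example):
docker build -t sdkman-java .
docker run --rm sdkman-java bash -c 'source /root/.sdkman/bin/sdkman-init.sh && java -version'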
