Problem installing packages in multi-stage Dockerfile in the final stage - docker

I want to create a minimal Docker image.
For that purpose I am using the following multi-stage Dockerfile:
FROM python:3.9-slim as base

ENV LANG=C.UTF-8 \
    LC_ALL=C.UTF-8 \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONFAULTHANDLER=1 \
    PYTHONHASHSEED=random \
    PYTHONUNBUFFERED=1

WORKDIR /app

FROM base as builder

ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1 \
    POETRY_VERSION=1.1.13

COPY pyproject.toml poetry.lock ./

RUN apt-get update && \
    apt-get install make build-essential libssl-dev zlib1g-dev \
        libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
        libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
        libffi-dev liblzma-dev python3.9-venv --yes && \
    pip install "poetry==$POETRY_VERSION" && \
    python -m venv /venv && \
    poetry export -f requirements.txt | /venv/bin/pip install -r /dev/stdin

COPY . /app
RUN poetry build && /venv/bin/pip install dist/*.whl

FROM base as final

ENV PATH=/venv/bin:$PATH
COPY --from=builder /venv /venv
RUN apt-get update && apt-get install -y procps curl

# for prometheus
EXPOSE 9090

CMD ["my_command"]
However, no matter where I put the install command in the final stage, the installed commands are not found in the final image.
RUN apt-get update && apt-get install -y procps curl
I have tried putting it before and after the COPY and ENV instructions, and still nothing.
Finally, I added another stage between base and builder just to run this command, and then everything works fine.
It's bugging me why this would be the case, though. Any ideas what's wrong with the Dockerfile above?
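For reference, a hedged sketch of the workaround described above (the exact stage names and wiring are my assumption, since the working Dockerfile isn't shown): an intermediate stage runs the apt-get install, and the final stage builds on it instead of on base.

FROM base as runtime
RUN apt-get update && apt-get install -y procps curl

# ... builder stage unchanged ...

FROM runtime as final
ENV PATH=/venv/bin:$PATH
COPY --from=builder /venv /venv
# for prometheus
EXPOSE 9090
CMD ["my_command"]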

Related

Docker: COPY failed: stat <file>: file does not exist

I am trying to copy a file into my docker container but the command fails. The file is in the same directory as the Dockerfile, so I don't understand the reason for the error.
I'd appreciate any help or advice. Thanks beforehand.
This is the code:
FROM ubuntu:20.04 as builder

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update
RUN apt-get install -y \
    build-essential \
    cmake \
    software-properties-common \
    libopencv-dev

RUN add-apt-repository -y ppa:chrberger/libcluon
RUN apt-get update
RUN apt-get install -y libcluon

ADD . /opt/sources
WORKDIR /opt/sources

RUN mkdir build && \
    cd build && \
    cmake -D CMAKE_BUILD_TYPE=Release -D CMAKE_INSTALL_PREFIX=/tmp/dest .. && \
    make && make install

FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update --fix-missing
RUN apt-get install -y \
    libopencv-core4.2 \
    libopencv-imgproc4.2 \
    libopencv-video4.2 \
    libopencv-calib3d4.2 \
    libopencv-features2d4.2 \
    libopencv-objdetect4.2 \
    libopencv-highgui4.2 \
    libopencv-videoio4.2 \
    libopencv-flann4.2 \
    libopencv-dnn-dev \
    python3-opencv

WORKDIR /usr/bin

COPY --from=builder /tmp/dest /usr
COPY --from=builder yolov3-tiny_obj.cfg /params

ENTRYPOINT ["/usr/bin/opendlv-perception-helloworld"]
Could you please clarify which line in your Dockerfile causes the error message?
Is the file you are trying to copy from your working directory yolov3-tiny_obj.cfg?
If that is the case, it fails because you specify that it should be copied from the builder stage, where it is not present at that path.
The line should probably look like this:
COPY yolov3-tiny_obj.cfg /params
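Put together, the final stage would then copy the build output from the builder stage and the config file from the build context, something like:

COPY --from=builder /tmp/dest /usr
COPY yolov3-tiny_obj.cfg /params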

Error running python code via docker image

I have Python code that runs fine and pulls data from an API, but I am having issues running it via Docker. I am using pyodbc to load data into SQL Server in my Python code. Here is my Dockerfile:
FROM python:3.9.2

RUN apt-get update -y && apt-get install -y --no-install-recommends \
    unixodbc-dev \
    unixodbc \
    libpq-dev

WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .

CMD ["python3","LoadAPI_data.py"]
After building the Docker image, I get the following error when I try to run it:
Error !!!!: ('01000', "[01000] [unixODBC][Driver Manager]Can't open
lib 'ODBC Driver 17 for SQL Server' : file not found (0)
(SQLDriverConnect)")
Can anyone let me know how I can get rid of this error?
I was able to get my code running by updating my Dockerfile to install the Microsoft ODBC driver for SQL Server as well as Python. Here is what my new Dockerfile looks like:
FROM ubuntu:18.04

RUN apt-get update -y && \
    apt-get install -y \
    libpq-dev \
    gcc \
    python3-pip \
    unixodbc-dev
RUN apt-get update && apt-get install -y \
    curl apt-utils apt-transport-https debconf-utils gcc build-essential g++-5 \
    && rm -rf /var/lib/apt/lists/*
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y --allow-unauthenticated msodbcsql17
RUN pip3 install pyodbc

WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .

CMD ["python3","LoadAPI_data.py"]

Entrypoint not found when deployed to Fargate. Locally works

I have the following Dockerfile, which currently works locally on my machine:
FROM python:3.7-slim-buster

WORKDIR /app
COPY . /app
VOLUME /app
RUN chmod +x /app/cat/sitemap_download.py
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

ARG VERSION=3.7.4

RUN apt update && \
    apt install -y bash wget && \
    wget -O /tmp/nordrepo.deb https://repo.nordvpn.com/deb/nordvpn/debian/pool/main/nordvpn-release_1.0.0_all.deb && \
    apt install -y /tmp/nordrepo.deb && \
    apt update && \
    apt install -y nordvpn=$VERSION && \
    apt remove -y wget nordvpn-release

RUN apt-get clean \
    && apt-get -y update

RUN apt-get -y install python3-dev \
    python3-psycopg2 \
    && apt-get -y install build-essential

RUN pip install --upgrade pip
RUN pip install -r cat/requirements.txt
RUN pip install awscli

ENTRYPOINT ["sh", "-c", "./entrypoint.sh"]
But when I deploy it to Fargate, the container stops before reaching the steady state with:
sh: 1: ./entrypoint.sh: not found
Edit: Adding entrypoint.sh file for clarification:
#!/bin/env sh
# start process, but it should exit once the file is in S3
/app/cat/sitemap_download.py
# Once the process is done, we are good to scale down the service
aws ecs update-service --cluster cluster_name --region eu-west-1 --service service-name --desired-count 0
I have tried modifying ENTRYPOINT to use the exec form, or the full path, but I always get the same issue. Any ideas on what I am doing wrong?
I've managed to fix it now.
Changing the Dockerfile to look as follows solves the issue:
COPY . /app
VOLUME /app
RUN chmod +x /app/cat/sitemap_download.py
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

ARG VERSION=3.7.4

RUN apt update && \
    apt install -y bash wget && \
    wget -O /tmp/nordrepo.deb https://repo.nordvpn.com/deb/nordvpn/debian/pool/main/nordvpn-release_1.0.0_all.deb && \
    apt install -y /tmp/nordrepo.deb && \
    apt update && \
    apt install -y nordvpn=$VERSION && \
    apt remove -y wget nordvpn-release

RUN apt-get clean \
    && apt-get -y update

RUN apt-get -y install python3-dev \
    python3-psycopg2 \
    && apt-get -y install build-essential

RUN pip install --upgrade pip
RUN pip install -r cat/requirements.txt
RUN pip install awscli

ENTRYPOINT ["/bin/bash"]
CMD ["./entrypoint.sh"]
I tried this after reading: What is the difference between CMD and ENTRYPOINT in a Dockerfile?
I believe this syntax fixes it because the ENTRYPOINT tells Docker to run bash at start, and the CMD then passes the script to it as a parameter.
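For completeness, a hedged alternative (not the fix used above) is to keep the exec form and run the script directly; that requires the script's shebang to resolve inside the image, and the conventional portable form is #!/usr/bin/env sh rather than #!/bin/env sh. Running the script through bash, as above, sidesteps the shebang entirely.

# Hypothetical alternative, assuming entrypoint.sh starts with "#!/usr/bin/env sh"
# and keeps the executable bit set by the chmod +x above.
ENTRYPOINT ["/app/entrypoint.sh"]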

How to reduce the time cost of duplicate steps in a multi-stage build?

I have a Go application that depends on cgo. When it is built, it needs libsodium-dev, libzmq3-dev, and libczmq-dev, and when it runs it also needs those three packages.
Currently, I use the following multi-stage build: a golang build environment as the first stage and a Debian slim image as the second stage. But as you can see, the three packages are installed twice, which wastes time (later I may add more packages of this kind).
FROM golang:1.12.9-buster AS builder

WORKDIR /src/pigeon
COPY . .

RUN apt-get update && \
    apt-get install -y --no-install-recommends libsodium-dev && \
    apt-get install -y --no-install-recommends libzmq3-dev && \
    apt-get install -y --no-install-recommends libczmq-dev && \
    go build cmd/main/pgd.go

FROM debian:buster-slim

RUN apt-get update && \
    apt-get install -y --no-install-recommends libsodium-dev && \
    apt-get install -y --no-install-recommends libzmq3-dev && \
    apt-get install -y --no-install-recommends libczmq-dev && \
    apt-get install -y --no-install-recommends python3 && \
    apt-get install -y --no-install-recommends python3-pip && \
    pip3 install jinja2

WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger

CMD ["./pgd"]
Of course, I could give up the multi-stage build and just use golang:1.12.9-buster both for building and for running, but that would make the final run image bigger (avoiding which is the whole point of a multi-stage build).
Am I missing something, or do I have to choose between the two?
This is my take on your question:
FROM debian:buster-slim as base

RUN mkdir /debs /debs_tmp \
    && chmod 777 /debs /debs_tmp

WORKDIR /debs

RUN apt-get update \
    && apt-get install -y -d \
       --no-install-recommends \
       -o dir::cache::archives="/debs_tmp/" \
       libsodium-dev \
       libzmq3-dev \
       libczmq-dev \
    && mv /debs_tmp/*.deb /debs \
    && rm -rf /debs_tmp \
    && apt-get install -y --no-install-recommends \
       python3 \
       python3-pip \
    && pip3 install jinja2 \
    && rm -rf /var/lib/apt/lists/*

##################

FROM golang:1.12.9-buster AS builder

COPY --from=base /debs /debs
WORKDIR /debs
RUN dpkg -i *.deb

WORKDIR /src/pigeon
COPY . .
RUN go build cmd/main/pgd.go

##################

FROM base

RUN rm -rf /debs

WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger

CMD ["./pgd"]
You download the required packages into a temporary folder, move the .deb files to a new location, and finally COPY the debs into the next stage. In the end you simply build on the first image you've created.
BTW, the containers will run as root. This might be an issue depending on what the software does; you might want to consider using an unprivileged user, as in the sketch below.
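A minimal sketch of that suggestion (the user name appuser is only an illustration, and the application files would need to live somewhere that user can read, e.g. its home directory rather than /root):

# In the final stage: create an unprivileged user and switch to it before CMD.
RUN useradd --create-home --shell /usr/sbin/nologin appuser
WORKDIR /home/appuser
USER appuser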
EDIT: sorry for the edits, but I ran a couple of examples locally and didn't have a Go script ready.
At the COPY . . step, any time your source changes, the cache will bust and you will run all later steps again. You can reorder the steps to allow Docker to cache the install of your dependencies. You can also join the apt-get install commands into one to reduce the overhead of processing the package manager database.
FROM golang:1.12.9-buster AS builder

WORKDIR /src/pigeon

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       libsodium-dev \
       libzmq3-dev \
       libczmq-dev

COPY . .
RUN go build cmd/main/pgd.go

FROM debian:buster-slim

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       libsodium-dev \
       libzmq3-dev \
       libczmq-dev \
       python3 \
       python3-pip \
    && pip3 install jinja2

WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger

CMD ["./pgd"]
You will still install the packages twice, but now those installs are cached for future builds. The way to reuse the install of the libraries is to reorder the steps, installing the libraries in a common base image and then installing the Go compiler in your build stage (sketched below), but that will almost certainly be more overhead than installing the libraries twice.
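A hedged sketch of that reordering, just to make the trade-off concrete (installing Go and a C toolchain from Debian's own packages is my assumption here, and buster's golang-go lags well behind golang:1.12.9):

FROM debian:buster-slim AS libs
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       libsodium-dev libzmq3-dev libczmq-dev

FROM libs AS builder
# cgo needs a C toolchain in addition to the Go compiler.
RUN apt-get update \
    && apt-get install -y --no-install-recommends golang-go build-essential
WORKDIR /src/pigeon
COPY . .
RUN go build cmd/main/pgd.go

FROM libs
WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger
CMD ["./pgd"]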
With BuildKit, you could share the apt cache between builds using an experimental syntax, but this requires that all builds use BuildKit (the syntax is not backwards compatible), and modifying docker's Debian image to preserve the apt package cache. From the BuildKit experimental documentation, there's the following example for apt:
# syntax = docker/dockerfile:experimental
FROM ubuntu
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt \
    apt update && apt install -y gcc
https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
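Adapted to the builder stage from the question, that pattern might look roughly like this (hedged: it assumes building with DOCKER_BUILDKIT=1 and a Docker version that supports the experimental syntax):

# syntax = docker/dockerfile:experimental
FROM golang:1.12.9-buster AS builder
# Keep downloaded packages so the cache mounts below are useful across builds.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; \
    echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt \
    apt-get update \
    && apt-get install -y --no-install-recommends \
       libsodium-dev libzmq3-dev libczmq-dev
WORKDIR /src/pigeon
COPY . .
RUN go build cmd/main/pgd.go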

Global ARG variable was changed after FROM in a multi-stage build

I am using a global ARG variable in instructions like FROM and RUN.
For example, I want to use the ${CUDA_VERSION} ARG variable in FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION} and in libcudnn7=${CUDNN_VERSION}-1+cuda${CUDA_VERSION} in the second build stage.
But the global ARG variable ${CUDA_VERSION} changes from 9.0 to 9.0.176 after passing FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}.
This is on Ubuntu 18.04 with Docker CE 18.09.04.
I have tried many things:
Changing the position of the ARG line within the build stage
Copying the original ${CUDA_VERSION} value into another ARG variable
Writing a .profile file with the environment variable in the first build stage, copying the .profile file into the second stage, and applying it with the source command
Using an ENV variable (but ENV variables disappear when entering another build stage)
The example Dockerfile and the result of building it are as follows.
Dockerfile
ARG handler_file=handler.py
ARG handler_name=Handler
ARG HANDLER_DIR=/handler
ARG HANDLER_FILE=${HANDLER_DIR}/${handler_file}
ARG HANDLER_NAME=${handler_name}

# Global arguments for Nvidia-docker
ARG CUDA_VERSION=9.0
ARG CUDNN_VERSION=7.4.1.5
ARG UBUNTU_VERSION=16.04

# == MultiStage Build ==
# 1-Stage
FROM python:3.7-alpine

ARG HANDLER_DIR
ARG HANDLER_FILE
ARG HANDLER_NAME
ARG handler_file
ARG handler_name
ARG CUDA_VERSION

RUN echo "${CUDA_VERSION}"

RUN mkdir -p ${HANDLER_DIR}
WORKDIR ${HANDLER_DIR}
COPY . .
RUN touch ${HANDLER_DIR}/__init__.py

# 2-Stage
FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}

# For Nvidia-Docker
ARG CUDA_VERSION
ARG CUDNN_VERSION

RUN echo "${CUDA_VERSION}"

# Copy directory from 1-stage
ARG HANDLER_DIR
RUN mkdir -p ${HANDLER_DIR}
WORKDIR ${HANDLER_DIR}
COPY --from=0 ${HANDLER_DIR} .

RUN echo "/usr/local/cuda-${CUDA_VERSION}/extras/CUPTI/lib64" > /etc/ld.so.conf.d/cupti.conf

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    wget \
    tar \
    libgomp1 \
    libcudnn7=${CUDNN_VERSION}-1+cuda${CUDA_VERSION} \
    python \
    python-dev \
    python-numpy \
    python-pip \
    python-setuptools \
    python3 \
    python3-dev \
    python3-numpy \
    python3-pip \
    python3-setuptools \
    python3-tk \
    libgtk2.0-dev \
    ${ADDITIONAL_PACKAGE} \
    && rm -rf /var/lib/apt/lists/*

ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH

RUN pip3 --no-cache-dir install --upgrade \
    pip setuptools
RUN pip3 install --upgrade pip && \
    pip3 install -r requirements.txt
Build Message
...
Step 9/33 : FROM python:3.7-alpine
---> 2caaa0e9feab
...
Step 16/33 : RUN echo "${CUDA_VERSION}"
---> Running in d057b0fd57a7
9.0
...
Step 21/33 : FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}
---> 2f9810b1b916
...
Step 24/33 : RUN echo "${CUDA_VERSION}"
---> Running in dc676c2a2992
9.0.176
...
Step 30/33 : RUN apt-get update && apt-get install -y --no-install-recommends build-essential wget tar libgomp1 libcudnn7=${CUDNN_VERSION}-1+cuda${CUDA_VERSION} python python-dev python-numpy python-pip python-setuptools python3 python3-dev python3-numpy python3-pip python3-setuptools python3-tk libgtk2.0-dev ${ADDITIONAL_PACKAGE} && rm -rf /var/lib/apt/lists/*
---> Running in 8518fb8d755c
...
E: Version '7.4.1.5-1+cuda9.0.176' for 'libcudnn7' was not found
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends build-essential wget tar libgomp1 libcudnn7=${CUDNN_VERSION}-1+cuda${CUDA_VERSION} python python-dev python-numpy python-pip python-setuptools python3 python3-dev python3-numpy python3-pip python3-setuptools python3-tk libgtk2.0-dev ${ADDITIONAL_PACKAGE} && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
The expected result is that the Dockerfile builds successfully,
but the changed ARG variable causes the following error:
E: Version '7.4.1.5-1+cuda9.0.176' for 'libcudnn7' was not found
I resolved my problem as follows, and I posted it in this issue:
https://github.com/docker/for-linux/issues/713
The value 9.0.176 seen in the build output comes from the ENV CUDA_VERSION that the nvidia/cuda base image sets, and an ENV always overrides an ARG of the same name within that stage. Copying the value into an ARG with a different name (BACKUP) before the FROM line keeps the original 9.0 available in the second stage:
# Global arguments for Nvidia-docker
ARG CUDA_VERSION=9.0
ARG CUDNN_VERSION=7.4.1.5
ARG UBUNTU_VERSION=16.04
ARG BACKUP=${CUDA_VERSION}
...
RUN echo "/usr/local/cuda-${BACKUP}/extras/CUPTI/lib64" > /etc/ld.so.conf.d/cupti.conf

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    wget \
    tar \
    libgomp1 \
    libcudnn7=${CUDNN_VERSION}-1+cuda${BACKUP} \
    python \
    python-dev \
    python-numpy \
    python-pip \
    python-setuptools \
    python3 \
    python3-dev \
    python3-numpy \
    python3-pip \
    python3-setuptools \
    python3-tk \
    libgtk2.0-dev \
    ${ADDITIONAL_PACKAGE} \
    && rm -rf /var/lib/apt/lists/*
...
...
Dockerfile
# Arguments for Nvidia-Docker
# Not every combination of CUDA, cuDNN, and Ubuntu is compatible; please check the REFERENCE OF NVIDIA-DOCKER
# REFERENCE OF NVIDIA-DOCKER
# https://hub.docker.com/r/nvidia/cuda/
ARG handler_file=handler.py
ARG handler_name=Handler
ARG HANDLER_DIR=/handler
ARG HANDLER_FILE=${HANDLER_DIR}/${handler_file}
ARG HANDLER_NAME=${handler_name}

# Global arguments for Nvidia-docker
ARG CUDA_VERSION=9.0
ARG CUDNN_VERSION=7.4.1.5
ARG UBUNTU_VERSION=16.04
ARG BACKUP=${CUDA_VERSION}

# == MultiStage Build ==
# 1-Stage
# Get watcher - if watcher is uploaded on github, remove this line.
FROM python:3.7-alpine

ARG HANDLER_DIR
ARG HANDLER_FILE
ARG HANDLER_NAME
ARG handler_file
ARG handler_name
ARG BACKUP
ARG CUDA_VERSION

RUN echo "${CUDA_VERSION}"
RUN echo "${BACKUP}"

RUN mkdir -p ${HANDLER_DIR}
WORKDIR ${HANDLER_DIR}
COPY . .
RUN touch ${HANDLER_DIR}/__init__.py

# 2-Stage
FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION}

ARG BACKUP
# For Nvidia-Docker
ARG CUDA_VERSION
ARG CUDNN_VERSION

RUN echo "${CUDA_VERSION}"
RUN echo "${BACKUP}"

# Copy directory from 0-stage
ARG HANDLER_DIR
RUN mkdir -p ${HANDLER_DIR}
WORKDIR ${HANDLER_DIR}
COPY --from=0 ${HANDLER_DIR} .

RUN echo "/usr/local/cuda-${BACKUP}/extras/CUPTI/lib64" > /etc/ld.so.conf.d/cupti.conf

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    wget \
    tar \
    libgomp1 \
    libcudnn7=${CUDNN_VERSION}-1+cuda${BACKUP} \
    python \
    python-dev \
    python-numpy \
    python-pip \
    python-setuptools \
    python3 \
    python3-dev \
    python3-numpy \
    python3-pip \
    python3-setuptools \
    python3-tk \
    libgtk2.0-dev \
    ${ADDITIONAL_PACKAGE} \
    && rm -rf /var/lib/apt/lists/*

ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH

RUN pip3 --no-cache-dir install --upgrade \
    pip setuptools
RUN pip3 install --upgrade pip && \
    pip3 install -r requirements.txt
