Docker image build fails on file add

This is my Dockerfile:
FROM debian:latest
LABEL MAINTAINER DINESH
LABEL version="1.0"
LABEL description="First image with Dockerfile & DINESH."
RUN apt-get clean
RUN apt-get update
RUN apt-get install -qy git
RUN apt-get install -qy locales
RUN apt-get install -qy nano
RUN apt-get install -qy tmux
RUN apt-get install -qy wget
RUN apt-get install -qy python3
RUN apt-get install -qy python3-psycopg2
RUN apt-get install -qy python3-pystache
RUN apt-get install -qy python3-yaml
RUN apt-get -qy autoremove
# ** ERROR IS BELOW **
ADD .bashrc /root/.bashrc
ADD .profile /root/.profile
ADD app /app
RUN locale-gen C.UTF-8 && /usr/sbin/update-locale LANG=C.UTF-8
ENV PYTHONIOENCODING UTF-8
ENV PYTHONPATH /app/
When I run docker build -t myimage ., it gives the error below:
"Step 17/20 : ADD app /app
ADD failed: stat /var/lib/docker/tmp/docker-builder687980062/.bashrc: no such file or directory"
I granted permissions on the path above, but that did not resolve it. Please let me know how I can solve this.

First, please make sure the file actually exists in the proper directory, as the error is saying "no such file or directory".
Also, instead of ADD, try using COPY, which works for me:
COPY .bashrc /root/
COPY .profile /root/
Make sure the files exist at the source paths and that the destinations are correct.
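Because ADD and COPY resolve their sources relative to the build context, a quick sanity check (using the file names from the Dockerfile above) is to list them from the directory you run docker build in, and to confirm that a .dockerignore file is not excluding them:
# run from the same directory as the Dockerfile
ls -la .bashrc .profile app
# if this file exists, make sure it does not exclude .bashrc, .profile or app/
cat .dockerignore
docker build -t myimage .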
Also, as per best practices, you can merge the apt-get lines into a single RUN command:
RUN apt-get update -yq \
&& apt-get install -yq python3-dev build-essential \
&& apt-get install -yq curl \
&& pip install -r requirements.txt \
&& apt-get purge -y --auto-remove gcc python3-dev build-essential

Alternatively, change the ADD lines to:
ADD .bashrc /root/
ADD .profile /root/
ADD app /
From the documentation:
ADD <src>... <dest>
The <dest> is an absolute path, or a path relative to WORKDIR, into which the source will be copied inside the destination container.
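To illustrate the absolute-versus-relative rule, a minimal sketch (not the asker's Dockerfile):
# a relative <dest> is resolved against WORKDIR, so this lands at /root/.bashrc
WORKDIR /root
ADD .bashrc .bashrc
# an absolute <dest> is independent of WORKDIR
ADD app /app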

Related

Error running python code via docker image

I have Python code that runs fine pulling data from an API, but I am having issues running it via Docker. I am using pyodbc to load the data into SQL Server. Here is my Dockerfile:
FROM python:3.9.2
RUN apt-get update -y && apt-get install -y --no-install-recommends \
unixodbc-dev \
unixodbc \
libpq-dev
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3","LoadAPI_data.py"]
After building the Docker image, when I try to run it I get the following error:
Error !!!!: ('01000', "[01000] [unixODBC][Driver Manager]Can't open
lib 'ODBC Driver 17 for SQL Server' : file not found (0)
(SQLDriverConnect)")
Can anyone let me know how I can get rid of this error?
I was able to get my code running by updating my Dockerfile to install the Microsoft ODBC driver for SQL Server as well as Python. Here is what my new Dockerfile looks like:
FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y \
libpq-dev \
gcc \
python3-pip \
unixodbc-dev
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential g++-5 \
&& rm -rf /var/lib/apt/lists/*
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y --allow-unauthenticated msodbcsql17
RUN pip3 install pyodbc
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD ["python3","LoadAPI_data.py"]

Docker image size comes out to 1.7 GB for Ubuntu with Python packages

Following is my Dockerfile:
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3.8 -y && apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN mkdir /opt/app/
RUN chown -R root:root /opt/app/
RUN cd /opt/app/
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]
The build generates an image of 1.7 GB. I need OpenJDK, hence I cannot use a standard Python image as the base. When I run docker history, I can see 2 or 3 layers (installing the packages above, like Python 3.8, OpenJDK and libsecp256k1-dev) each taking 400 MB to 500 MB. Ubuntu as a base image takes only 64 MB; the rest of the size comes from my Dockerfile's layers.
I believe I need to rewrite the Dockerfile to reduce the image size, which I did, but nothing concrete came of it. Please assist me in getting the image below at least 1 GB.
[Update]
Below is my updated Dockerfile:
FROM ubuntu:18.04 AS builder
WORKDIR /opt/app
COPY requirements.txt /opt/app/aws/requirements.txt
RUN mkdir -p /opt/app/aws \
&& apt-get update -yq \
&& apt-get install -y python3.8 python3-pip openjdk-8-jre -yq && apt-get -y clean all \
&& chown -R root:root /opt/app && cd /opt/app/aws && pip3 install -r requirements.txt
FROM alpine
COPY --from=builder /opt/app /opt/app
SHELL ["/bin/bash", "-c"]
CMD ["bash","/opt/app/aws/bin/connector/connect.sh"]
After removing unwanted libraries like git and using a multi-stage build, the image is still approximately 1.7 GB, which I believe is a lot. Any suggestions to improve this?
You have multiple issues going on.
First, each separate RUN apt install adds a layer and increases your image size; put them all in the same RUN instruction and, at the end of it, delete all cached apt files.
Second, you're installing unnecessary packages. Why would you need vim and git, for instance? Why are you installing build-essential and other build-related packages if you're not building anything?
Third, it seems you tried to do a multi-stage build but ended up adding everything to the same image. Read up on Python multi-stage builds.
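For reference, a common shape for a Python multi-stage build is sketched below: dependencies are installed with pip install --user in a builder stage, and only the resulting /root/.local tree is copied into the final image (the image tag here is illustrative, not taken from the question):
FROM python:3.8-slim AS builder
COPY requirements.txt .
# --user puts packages under /root/.local so they can be copied out wholesale
RUN pip install --user -r requirements.txt
FROM python:3.8-slim
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH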
As per best practices, use a single RUN instead of multiple RUNs. For example:
RUN apt-get update -yq \
&& apt-get install -yq python3-dev build-essential \
&& apt-get install -yq curl \
&& pip install -r requirements.txt \
&& apt-get purge -y --auto-remove gcc python3-dev build-essential
You can also use a multi-stage build: if you don't require git in your final image, you can leave it out of the final stage.
And if possible, use an Alpine-based image.
Try disabling APT's recommended packages with --no-install-recommends; you can read more about it in the apt-get documentation.
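Applied to the heaviest package in the question, that looks like this (a minimal sketch, with the usual cache cleanup in the same layer):
RUN apt-get update \
&& apt-get install -y --no-install-recommends openjdk-8-jre-headless \
&& rm -rf /var/lib/apt/lists/*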
With openjdk-8-jre-headless in place of openjdk-8-jre, the image comes out smaller:
FROM ubuntu:18.04 AS builder
RUN apt update -y
RUN apt install python3-pip -y
RUN apt install build-essential automake pkg-config libtool libffi-dev libgmp-dev -y
RUN apt install libsecp256k1-dev -y
RUN apt install openjdk-8-jre-headless -y
RUN apt install git -y
RUN apt install libkrb5-dev -y
RUN apt install vim -y
RUN mkdir /opt/app
RUN chown -R root:root /opt/app
COPY ["requirements.txt","/opt/app/requirements.txt"]
SHELL ["/bin/bash", "-c"]
WORKDIR /opt/app
RUN pip3 install -r requirements.txt && apt-get -y clean all
RUN mkdir /opt/app/
RUN chown -R root:root /opt/app/
RUN cd /opt/app/
RUN git clone -b master https://bitbucket.org/heroes/test.git
CMD ["bash","/opt/app/bin/connect.sh"]

How to reduce the time cost of duplicated steps in a multi-stage build?

I have a Go application that depends on cgo. To build, it needs libsodium-dev, libzmq3-dev and libczmq-dev, and at run time it needs the same three packages.
Currently, I use the following multi-stage build: a golang build environment as the first stage and a Debian slim image as the second stage. As you can see, the three packages are installed twice, which wastes time (and later I may add more packages like these).
FROM golang:1.12.9-buster AS builder
WORKDIR /src/pigeon
COPY . .
RUN apt-get update && \
apt-get install -y --no-install-recommends libsodium-dev && \
apt-get install -y --no-install-recommends libzmq3-dev && \
apt-get install -y --no-install-recommends libczmq-dev && \
go build cmd/main/pgd.go
FROM debian:buster-slim
RUN apt-get update && \
apt-get install -y --no-install-recommends libsodium-dev && \
apt-get install -y --no-install-recommends libzmq3-dev && \
apt-get install -y --no-install-recommends libczmq-dev && \
apt-get install -y --no-install-recommends python3 && \
apt-get install -y --no-install-recommends python3-pip && \
pip3 install jinja2
WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger
CMD ["./pgd"]
Of course, I could give up the multi-stage build and use golang:1.12.9-buster both to build and to run, but that would make the final image bigger, and keeping it small is the whole advantage of a multi-stage build.
Am I missing something, or do I have to choose between the two?
This is my take on your question:
FROM debian:buster-slim as base
RUN mkdir /debs /debs_tmp \
&& chmod 777 /debs /debs_tmp
WORKDIR /debs
RUN apt-get update \
&& apt-get install -y -d \
--no-install-recommends \
-o dir::cache::archives="/debs_tmp/" \
libsodium-dev \
libzmq3-dev \
libczmq-dev \
&& mv /debs_tmp/*.deb /debs \
&& rm -rf /debs_tmp \
&& apt-get install -y --no-install-recommends \
python3 \
python3-pip \
&& pip3 install jinja2 \
&& rm -rf /var/lib/apt/lists/*
##################
FROM golang:1.12.9-buster AS builder
COPY --from=base /debs /debs
WORKDIR /debs
RUN dpkg -i *.deb
WORKDIR /src/pigeon
COPY . .
RUN go build cmd/main/pgd.go
##################
FROM base
RUN rm -rf /debs
WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger
CMD ["./pgd"]
You download the required packages into a temporary folder, move the .deb files to a new location, and COPY them into the build stage, where dpkg -i installs them. The final stage then simply reuses the first image (base) you created.
By the way, the containers will run as root. Depending on what the software does this might be an issue, so you might want to consider using an unprivileged user.
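A minimal sketch of such an unprivileged user (the name appuser is just an example):
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser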
EDIT: sorry for the edits, but I ran a couple of examples locally and didn't have a Go script ready.
At the COPY . . step, any time your source changes, the cache busts and all later steps run again. You can reorder the steps to let Docker cache the install of your dependencies, and you can join the apt-get install commands into one to reduce the overhead of processing the package manager database.
FROM golang:1.12.9-buster AS builder
WORKDIR /src/pigeon
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libsodium-dev \
libzmq3-dev \
libczmq-dev
COPY . .
RUN go build cmd/main/pgd.go
FROM debian:buster-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libsodium-dev \
libzmq3-dev \
libczmq-dev \
python3 \
python3-pip \
&& pip3 install jinja2
WORKDIR /root/
RUN mkdir logger
COPY --from=builder /src/pigeon/pgd .
COPY --from=builder /src/pigeon/logger logger
CMD ["./pgd"]
You will still install the packages twice, but now those installs are cached for future builds. The only way to reuse the library install itself is to reorder the stages: install the libraries in a common base image, then add the Go compiler on top of it in the build stage. That will almost certainly be more overhead than installing the libraries twice.
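For completeness, that reordering would look roughly like this (a sketch only: it swaps the golang image for Debian's packaged Go toolchain, and it omits the python3/jinja2 install and the logger copy from the original for brevity):
FROM debian:buster-slim AS base
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libsodium-dev libzmq3-dev libczmq-dev
FROM base AS builder
# Debian's golang package stands in for golang:1.12.9-buster;
# build-essential provides the C compiler that cgo needs
RUN apt-get update \
&& apt-get install -y --no-install-recommends golang build-essential
WORKDIR /src/pigeon
COPY . .
RUN go build cmd/main/pgd.go
FROM base
WORKDIR /root/
COPY --from=builder /src/pigeon/pgd .
CMD ["./pgd"]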
With BuildKit, you could share the apt cache between builds using an experimental syntax, but this requires that all builds use BuildKit (the syntax is not backwards compatible), and modifying docker's Debian image to preserve the apt package cache. From the BuildKit experimental documentation, there's the following example for apt:
# syntax = docker/dockerfile:experimental
FROM ubuntu
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt \
apt update && apt install -y gcc
https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
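Note that cache mounts only take effect when the build runs under BuildKit, e.g.:
DOCKER_BUILDKIT=1 docker build .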

Is it possible to make certain lines within a Dockerfile architecture dependent?

I have a Dockerfile and I need to include different lines depending on whether I'm building on my development environment or on a Raspberry Pi.
Can I add in some sort of architecture dependent IF statement around the only lines that vary?
# x64 version (shortened)
FROM node:10
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN apt-get -y update
RUN apt-get -y install build-essential g++
RUN echo 'deb http://deb.debian.org/debian stretch main' > /etc/apt/sources.list
RUN apt-get -y update
RUN apt-get -y install ruby2.3 ruby2.3-dev
The apt source varies between the architectures.
# ARM / Raspbian version. (shortened)
FROM node:10
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN apt-get -y update
RUN apt-get -y install build-essential g++
RUN wget https://archive.raspbian.org/raspbian.public.key -O - | apt-key add -
RUN echo 'deb http://archive.raspbian.org/raspbian/ stretch main' > /etc/apt/sources.list
RUN apt-get -y update
RUN apt-get -y install ruby2.3 ruby2.3-dev
In a Dockerfile you can use ARG to define a parameter for your build process, like:
FROM node:10
ARG platform=x64
See the documentation for ARG.
You can override the default value at build time like this:
docker build --build-arg platform=arm .
Inside your Dockerfile it behaves like any other variable, so you can branch on it:
RUN if [ "$platform" = "arm" ]; then ... else ... fi

Dockerfile error: /bin/sh: 1: [“python”,: not found

Here is the Dockerfile I tried to build:
FROM ubuntu:latest
# install flask server
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY app.py /
RUN pip install flask
# install ruby
RUN \
apt-get install -y ruby ruby-dev ruby-bundler && \
rm -rf /var/lib/apt/lists/*
# install lua
RUN apt-get update -y && apt-get install -y luajit luarocks
# Define default command.
CMD [“python”, “app.py”]
However, it fails with the following error:
/bin/sh: 1: [“python”,: not found
I have no idea why this happened. Could someone please help me with it?
Make sure to use the right CMD syntax with straight quotes ("), not typographic quotes (“”):
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
