Docker image package patch within Dockerfile

I have the Docker image below, where I need to apply a patch to the curl package. In line 3 of the Dockerfile I am already running an update, but curl still shows up in the vulnerability report.
When I add RUN yum -y update curl at the end of the Dockerfile, it no longer shows up in the vulnerability report.
Is there a fix? All packages must be installed at their latest version; I don't want to mention each one explicitly.
Or are there any mistakes in my Dockerfile?
FROM centos:7 AS base
FROM base AS build
# Install all dependencies
RUN yum -y update \
&& yum install -y openssl-devel bzip2-devel libffi-devel \
zlib-devel wget gcc make
# Below: compile Python from source
FROM base
ENV LD_LIBRARY_PATH=/usr/local/lib64:/usr/local/lib
COPY --from=build /usr/local/ /usr/local/
# Copy Code
COPY . /app/
WORKDIR /app
# Install code dependencies.
RUN /usr/local/bin/python -m pip install --upgrade pip \
&& pip install -r requirements.txt
# Why do I need this step when I already run an update in line 3? If I leave it out, curl shows up in the compliance report. Any fix?
RUN yum -y update curl
# run Application
ENTRYPOINT ["python"]
CMD ["test.py"]

In order to understand what constitutes an image, you need to look at a Dockerfile in a different way:
Every step (with the exception of FROM) creates a new image, with the results of the previous step as a base.
FROM doesn't use the previous step, but an explicitly specified one.
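A minimal illustration of that rule (hypothetical marker file, not from the original thread):
FROM centos:7 AS first
RUN touch /marker    # exists only in the "first" stage
FROM centos:7
RUN ls /marker       # fails: this stage starts fresh from centos:7, not from "first"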
Now, looking at your Dockerfile, you seem to wonder why RUN yum -y update curl doesn't work as expected. For easier understanding, let's trace it backwards:
RUN yum -y update curl
RUN /usr/local/bin/python -m pip install --upgrade pip && pip install -r requirements.txt
WORKDIR /app
COPY . /app/
COPY --from=build /usr/local/ /usr/local/
ENV LD_LIBRARY_PATH=/usr/local/lib64:/usr/local/lib
FROM base -- at this point, the previous step is changed to the last step of base
FROM centos:7 AS base -- here, the previous step is changed to centos:7
As you can see, nowhere in this chain is the yum -y update from the build stage! The final stage starts again from base, so that update never becomes part of the final image.
BTW: while typing this, I started wondering what your precise question actually is, i.e. whether this works or doesn't, or whether you wonder why it's necessary. Are you even aware of the difference between yum update and yum update curl?
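A minimal sketch of a fix (assuming the goal is simply to have updated packages in the shipped image): run the update in the final stage, since that is the image that actually ships:
FROM centos:7 AS base
FROM base AS build
RUN yum -y update \
 && yum install -y openssl-devel bzip2-devel libffi-devel \
    zlib-devel wget gcc make
FROM base
# Update packages here, in the final stage, so the fix lands in the image you ship
RUN yum -y update
# ... rest of the Dockerfile unchanged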

docker build and friends have a cache system based on the text of the input. So if the text of the command yum -y update doesn't change, Docker will keep using the cached output of that step forever (or until the cache is deleted). Try running the build with --no-cache and see if that helps.
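For example (the image tag is a placeholder):
docker build --no-cache -t myimage .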

Related

Workflow for building python wheels in a multistage dockerfile with pipenv

In order to keep the final Docker image small, my usual approach to building Python projects with binary dependencies is to build the pinned dependencies in a first stage and copy them to a final stage that lacks the build toolchain. Broadly:
FROM python:3 as builder
RUN apt-get update && apt-get install -y libfoo-dev libbar-dev
COPY constraints.txt /
RUN pip wheel \
--constraint /constraints.txt \
--wheel-dir /wheels \
python-foo pyBar
FROM python:3-slim
RUN apt-get update && apt-get install -y libfoo libbar
COPY requirements.txt constraints.txt /
COPY --from=builder /wheels /wheels
RUN pip install \
--requirement /requirements.txt \
--constraint /constraints.txt \
--only-binary :all: \
--find-links /wheels
Now I am trying to do something similar in a project managed with pipenv, and I am quite at a loss as to how to achieve the same effect: pre-building, in a first stage, the few projects that lack a public wheel at the version pinned in the lockfile, and using them in a later pipenv install --deploy in the final stage.
Does this even make sense with the hash checking pipenv does? Is there any alternative to reduce the final image size? I'd like to avoid using a private index to store prebuilt wheels; I'd rather keep the solution contained in the Dockerfile.
Related question: How to make lightweight docker image for python app with pipenv
A solution is to install a full virtualenv and copy it, not only some wheels.
FROM python:3 as builder
RUN apt-get update && apt-get install -y libfoo-dev libbar-dev
RUN pip install pipenv
WORKDIR /app
COPY Pipfile* /app/
RUN mkdir /app/.venv
RUN pipenv install --deploy
FROM python:3-slim
RUN apt-get update && apt-get install -y libfoo libbar
WORKDIR /app
COPY --from=builder /app/.venv /app/.venv
ENV PATH=/app/.venv/bin:$PATH
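A note on the RUN mkdir /app/.venv line: pipenv reuses a .venv directory that already exists inside the project, which is what makes the later COPY --from=builder pick up the whole virtualenv. An equivalent, more explicit variant (using pipenv's PIPENV_VENV_IN_PROJECT setting) would be:
ENV PIPENV_VENV_IN_PROJECT=1
RUN pipenv install --deploy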

Packages won't install in Docker

I am trying to install tesseract-ocr in Docker from a Dockerfile. When I build the Dockerfile everything looks normal and I get no errors, but when I run the container, tesseract is not installed.
If I access the container using sudo docker exec -t -i <container_id> /bin/bash and manually install tesseract using apt-get install -y tesseract-ocr-all, it installs and works perfectly. Why doesn't it work when I try to install it during the build process?
My Dockerfile looks like this:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
FROM python:3.7
WORKDIR ocr
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .
Thanks!
It looks like you are leveraging Docker multi-stage builds without realizing it.
When you put FROM python:3.7, you essentially throw away everything you have done above that, since you start a new stage.
The easiest solution I can see is to move
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
RUN tesseract --version
into the FROM python:3.7 stage, and remove the FROM ubuntu:20.04 stage.
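The result would look something like this (a sketch combining the two stages above):
FROM python:3.7
# Install tesseract in the same stage the application runs in
RUN apt-get update \
 && apt-get install -y tesseract-ocr-all
RUN tesseract --version
WORKDIR ocr
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY . .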
You need to switch your user, since you likely don't have permission to run those commands. Something like this should work:
USER root
RUN apt-get update \
&& apt-get install -y tesseract-ocr-all
USER <switch back to previous user>
You'll need to figure out what the default user is in order to switch back; you can probably find it in the Ubuntu docs or by running whoami.

Prevent docker from building the image from scratch after making changes to the code

I'm a Docker newbie trying to develop in a Docker container. The problem I have is that every time I make a single-line change to the code and try to rerun the container, Docker rebuilds the image from scratch, which takes a very long time. How should I set up the project correctly so it makes the best use of the cache? I'm pretty sure it doesn't have to reinstall all the apt-get and pip installs (btw, I am developing in Python) whenever I make changes to the source code. Does anyone have an idea what I am missing? I appreciate any help.
My current Dockerfile:
FROM tiangolo/uwsgi-nginx-flask:python3.6
# Copy the current directory contents into the container at /app
ADD ./app /app
# Run python's package manager and install the flask package
RUN apt-get update -y \
&& apt-get -y install default-jre \
&& apt-get install -y \
build-essential \
gfortran \
libblas-dev \
liblapack-dev \
libxft-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
ADD ./requirements.txt /app/requirements.txt
RUN pip3 install -r requirements.txt
Once the cache breaks in a Dockerfile, all of the following lines will need to be rebuilt, since they no longer have a cache hit. The cache lookup searches for an existing previous layer plus an identical command (or identical contents, for something like a COPY) in order to reuse the cache. If either does not match, you have a cache miss and the build step is executed. For your scenario, you simply need to reorder your lines so that the frequently changing part is at the end of the file rather than the beginning:
FROM tiangolo/uwsgi-nginx-flask:python3.6
# Run python's package manager and install the flask package
RUN apt-get update -y \
&& apt-get -y install default-jre \
&& apt-get install -y \
build-essential \
gfortran \
libblas-dev \
liblapack-dev \
libxft-dev \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r requirements.txt
# Copy the current directory contents into the container at /app
COPY app /app
I've also modified your ADD lines to COPY because you don't need the extra features provided by ADD.
During development, I'd recommend mounting app as a volume in your container so you don't need to rebuild the image for every code change. You can leave the COPY app /app in your Dockerfile; the volume mount will simply overlay the directory, hiding anything in your image at that location. You only need to restart your container to pick up your modifications. Once finished, a build will create an image that looks identical to your development environment.
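For example (image name and host port are placeholders):
docker run -d -p 8080:80 -v "$(pwd)/app:/app" myimage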

docker-compose update from S3 bucket

Our Dockerfile invokes a Python script which copies a binary from S3 to /usr/bin. This works fine the first time, but from then on "docker-compose build" does nothing because everything is cached. This is a problem when the binary has changed.
Short of building with --no-cache, what is the best way to make sure "docker-compose build" always picks up the new binary if there is one? We don't mind if it unnecessarily downloads the binary even when it is unchanged, as long as it does work when the binary has changed.
It seems like we want a Dockerfile step that always executes?
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install software-properties-common
RUN apt-get -y install --reinstall ca-certificates
RUN add-apt-repository ppa:fkrull/deadsnakes
RUN apt-get update && apt-get install -y \
curl \
wget \
vim \
git \
python3.5 \
python3-pip \
python3-setuptools \
libpcap0.8-dev
RUN ln -sf /usr/bin/python3.5 /usr/bin/python3
ADD . /app
WORKDIR /app
# Install Python Requirements
RUN pip3 install -r etc/python/requirements.txt
# Download/Install processor and associated libs
RUN python3 setup_processor.py
RUN mkdir -p /logs
ENTRYPOINT ["/app/entrypoint.sh"]
Where setup_processor.py downloads directly from S3 to /usr/bin.
As of now there is no direct feature for this, but there is a workaround.
Add a build argument just before your download step:
ARG BUILD_ON=now
# Download/Install processor and associated libs
RUN python3 setup_processor.py
While building the image, use:
docker build --build-arg BUILD_ON="$(date)" ....
This makes sure the value of the ARG step changes on every build, so the cache for every step after it is invalidated.
A feature for this has already been requested and is being worked on in the thread below:
https://github.com/moby/moby/issues/1996

How do I add a package to an already existing image?

I have a RoR app that uses imagemagick, as specified in the Gemfile. I am using Docker's official rails image to build my image with the following Dockerfile:
FROM rails:onbuild
RUN apt-get install imagemagick
and get the following error:
Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/bundle/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Now, that's probably because the imagemagick package is missing from the OS, even though I specified it in my Dockerfile. So I guess the bundle install command is issued before my RUN apt-get command.
My question: using this base image, is there a way to ensure imagemagick is installed prior to bundling?
Do I need to fork and change the base image Dockerfile to achieve that?
You are right: the ONBUILD instructions from the rails:onbuild image are executed just after the FROM instruction of your Dockerfile.
What I suggest is to change your Dockerfile as follows:
FROM ruby:2.2.0
RUN apt-get update && apt-get install -y imagemagick
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
RUN apt-get update && apt-get install -y nodejs --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y mysql-client postgresql-client sqlite3 --no-install-recommends && rm -rf /var/lib/apt/lists/*
COPY Gemfile /usr/src/app/
COPY Gemfile.lock /usr/src/app/
RUN bundle install
COPY . /usr/src/app
EXPOSE 3000
CMD ["rails", "server"]
which I made based on the rails:onbuild Dockerfile, moving the ONBUILD instructions down and removing the ONBUILD flavor.
Most images clean out the package cache to save on size, so you need to refresh it before installing. Try this:
apt-get update && apt-get install -y imagemagick
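In Dockerfile form that is a single RUN step:
RUN apt-get update && apt-get install -y imagemagick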
Or spin up a copy of the container and look for yourself:
docker run -it --rm <image-name-or-id> /bin/bash
The --rm flag ensures the container is removed after you exit the shell. Once in the shell, look for the package binary (or use dpkg --list).
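For example, once inside the shell (convert is one of the binaries ImageMagick ships):
dpkg --list | grep -i imagemagick    # is the package installed?
which convert                        # is the binary on the PATH?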
