I'm trying to make a Dockerfile that compiles the latest stable Nginx with the VTS module. The big problem I'm having is that I can't find a permanent link that, when put in the Dockerfile, will always download and install the latest stable nginx; I can only specify a fixed version like 1.14.2. Is there a way I can modify my Dockerfile so it always downloads the latest stable version and not just one pinned version?
This is my Dockerfile:
FROM debian:stretch-slim
RUN apt-get update && \
apt-get install -y git wget libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make build-essential zlib1g-dev && \
cd /tmp/ && \
wget http://nginx.org/download/nginx-1.14.2.tar.gz && \
git clone git://github.com/vozlt/nginx-module-vts.git && \
tar zxvf nginx-1.14.2.tar.gz && \
rm -f nginx-1.14.2.tar.gz && \
cd nginx-1.14.2 && \
./configure --prefix=/tmp/nginx-1.14.2 --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
--user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module \
--with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module \
--with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module \
--with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream \
--with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module \
--with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.14.2/debian/debuild-base/nginx-1.14.2=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' \
--with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' \
--add-module=/tmp/nginx-module-vts && \
make && make install && \
cp -f objs/nginx /usr/sbin/nginx && \
apt-get clean && rm -rf /var/lib/apt/lists/*
CMD ["nginx", "-g", "daemon off;"]
If you are looking for an easy way to keep using the stable version while compiling nginx from source (there is no single direct URL for it, as far as I know), then you can pass a build argument to your Dockerfile like this:
...
ARG NGINX_STABLE_VERSION
RUN wget http://nginx.org/download/nginx-${NGINX_STABLE_VERSION}.tar.gz
...
And run the build command like below so the downloaded nginx version is controlled by the passed argument:
docker build --build-arg NGINX_STABLE_VERSION=1.14.2 .
However, if you are looking for a way to keep using the official nginx Docker image with your custom modules - assuming all the custom modules you are using support the dynamic module feature, as the vts module does - then you can do it with a multi-stage build that makes use of nginx dynamic modules.
According to the nginx-module-vts changelog, the module can be compiled as a dynamic module, so you can do a multi-stage build that compiles nginx with the module you want and then copies the generated .so file into the nginx image of the same version.
Nginx stable images can be found here, tagged with the word stable.
All you need to do now is modify the Dockerfile so it builds vts as a dynamic module, then add another stage that uses the stable image together with the module generated in the first stage. You can still add an argument during the build, for example:
...
ARG NGINX_STABLE_VERSION
RUN wget http://nginx.org/download/nginx-${NGINX_STABLE_VERSION}.tar.gz
...
And run the build like this:
docker build --build-arg NGINX_STABLE_VERSION=1.14.2 .
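Putting the two ideas together, a rough sketch of such a multi-stage Dockerfile could look like this (an outline only, not a tested build; the package list, module file name and paths are assumptions you would adapt to your setup):
ARG NGINX_STABLE_VERSION=1.14.2
# Stage 1: compile the vts module as a dynamic module against the matching nginx sources
FROM debian:stretch-slim AS builder
ARG NGINX_STABLE_VERSION
RUN apt-get update && \
    apt-get install -y git wget libpcre3-dev libssl-dev make build-essential zlib1g-dev && \
    cd /tmp && \
    wget http://nginx.org/download/nginx-${NGINX_STABLE_VERSION}.tar.gz && \
    git clone https://github.com/vozlt/nginx-module-vts.git && \
    tar zxf nginx-${NGINX_STABLE_VERSION}.tar.gz && \
    cd nginx-${NGINX_STABLE_VERSION} && \
    ./configure --with-compat --add-dynamic-module=/tmp/nginx-module-vts && \
    make modules
# Stage 2: official image of the same version, plus the generated module
FROM nginx:${NGINX_STABLE_VERSION}
ARG NGINX_STABLE_VERSION
COPY --from=builder /tmp/nginx-${NGINX_STABLE_VERSION}/objs/ngx_http_vhost_traffic_status_module.so /etc/nginx/modules/
You would still enable the module in nginx.conf with a load_module directive, and build the image with the same --build-arg shown above.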
Update:
Nginx does not provide a single link that always points to the stable version, so you might resort to parsing the HTML of the download page, like the following, to keep getting the latest download link for the stable version:
Note that this relies on the HTML page, which is not the most robust solution in the long term.
echo "http://nginx.org$(curl -s http://nginx.org/en/download.html | grep -oP 'Stable version.*?\K(/download/.*?tar.gz)')"
Output:
http://nginx.org/download/nginx-1.14.2.tar.gz
In your Dockerfile it can be like this:
Make sure that you have curl installed
RUN curl "http://nginx.org$(curl -s http://nginx.org/en/download.html | grep -oP 'Stable version.*?\K(/download/.*?tar.gz)')" --output nginx.tar.gz
I ran into a strange case: when I tried to install a bundle locally I got an error, but when I installed the same bundle on the test server everything installed without errors. I use docker-compose and install the bundle inside the image. The docker-compose file and the Dockerfile with all dependencies are absolutely the same, all in git.
composer require league/flysystem-bundle
and locally I got this:
Using version dev-master for league/flysystem-bundle
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Restricting packages listed in "symfony/symfony" to "5.0.*"
Installation failed, reverting ./composer.json to its original content.
[RuntimeException]
Could not load package ezsystems/ezplatform in http://repo.packagist.org:
[UnexpectedValueException] Could not parse version constraint dev-load-varnish-only-when-used as ^2.0#dev: Invalid version string "^2.0#dev"
[UnexpectedValueException]
Could not parse version constraint dev-load-varnish-only-when-used as ^2.0#dev: Invalid version string "^2.0#dev"
Locally the composer version is: Composer version 1.10.11 2020-09-08 16:53:44
and on the test server:
/var/www/symfony # composer require league/flysystem-bundle
Using version dev-master for league/flysystem-bundle
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Restricting packages listed in "symfony/symfony" to "5.0.*"
Prefetching 3 packages 🎶 💨
- Downloading (100%)
Package operations: 3 installs, 0 updates, 0 removals
- Installing league/mime-type-detection (1.4.0): Loading from cache
- Installing league/flysystem (1.x-dev 53f16fd): Loading from cache
- Installing league/flysystem-bundle (dev-master 525845a): Loading from cache
Package easycorp/easy-log-handler is abandoned, you should avoid using it. No replacement was suggested.
Package zendframework/zend-code is abandoned, you should avoid using it. Use laminas/laminas-code instead.
Package zendframework/zend-eventmanager is abandoned, you should avoid using it. Use laminas/laminas-eventmanager instead.
Writing lock file
Generating autoload files
20 packages you are using are looking for funding.
Use the `composer fund` command to find out more!
Symfony operations: 1 recipe (c67222ac592a52b7dec1c2cd56763685)
- WARNING league/flysystem-bundle (>=1.0): From github.com/symfony/recipes-contrib:master
The recipe for this package comes from the "contrib" repository, which is open to community contributions.
Review the recipe at https://github.com/symfony/recipes-contrib/tree/master/league/flysystem-bundle/1.0
Do you want to execute this recipe?
[y] Yes
[n] No
[a] Yes for all packages, only for the current installation session
[p] Yes permanently, never ask again for this project
(defaults to n):
ocramius/package-versions: Generating version class...
ocramius/package-versions: ...done generating version class
Executing script cache:clear [OK]
Executing script assets:install public [OK]
On the test server the composer version is: Composer version 1.10.10 2020-08-03 11:35:19
My Dockerfile:
FROM alpine:edge
LABEL maintainer="Vincent Composieux <vincent.composieux#gmail.com>"
RUN apk add --update --no-cache \
coreutils \
yarn \
php7-fpm \
php7-apcu \
php7-ctype \
php7-curl \
php7-dom \
php7-gd \
php7-iconv \
php7-imagick \
php7-json \
php7-intl \
php7-mcrypt \
php7-fileinfo \
php7-mbstring \
php7-opcache \
php7-openssl \
php7-pdo \
php7-pdo_mysql \
php7-mysqli \
php7-pdo_pgsql \
php7-pgsql \
php7-xml \
php7-zlib \
php7-phar \
php7-tokenizer \
php7-session \
php7-simplexml \
php7-xdebug \
php7-zip \
php7-xmlwriter \
make \
curl \
zlib-dev \
libxml2-dev \
rabbitmq-c-dev \
oniguruma-dev \
php7-pecl-amqp \
php7-amqp \
php7-redis
RUN apk add --no-cache --repository=http://dl-cdn.alpinelinux.org/alpine/edge/testing/ php7-pecl-mongodb
RUN echo "$(curl -sS https://composer.github.io/installer.sig) -" > composer-setup.php.sig \
&& curl -sS https://getcomposer.org/installer | tee composer-setup.php | sha384sum -c composer-setup.php.sig \
&& php composer-setup.php && rm composer-setup.php* \
&& chmod +x composer.phar && mv composer.phar /usr/bin/composer
COPY symfony.ini /etc/php7/conf.d/
COPY symfony.ini /etc/php7/cli/conf.d/
COPY xdebug.ini /etc/php7/conf.d/
COPY symfony.pool.conf /etc/php7/php-fpm.d/
CMD ["php-fpm7", "-F"]
WORKDIR /var/www/symfony
EXPOSE 9001
Why do I end up with different composer versions at the same time? Composer is installed the same way, by the same Dockerfile. How can I fix this problem?
I don't understand how this is possible; this problem shouldn't appear when using Docker, should it?
Looks like composer 1.10.11 is broken. You can switch to 1.10.10 by passing the version to the installer like this:
php composer-setup.php --version=1.10.10
Confirmed, that error comes from composer 1.10.11. You should downgrade to composer v1.10.10:
composer self-update 1.10.10
You can use self-update to downgrade the composer version.
Now you can update your composer version to 1.10.12; in that version you don't have this error.
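Since the Dockerfile above installs composer through the installer script, you can also pin the composer version right there, so that local and test builds always end up with the same binary. A sketch based on the install block above (--version is an option of the official installer script):
RUN echo "$(curl -sS https://composer.github.io/installer.sig) -" > composer-setup.php.sig \
    && curl -sS https://getcomposer.org/installer | tee composer-setup.php | sha384sum -c composer-setup.php.sig \
    && php composer-setup.php --version=1.10.10 && rm composer-setup.php* \
    && chmod +x composer.phar && mv composer.phar /usr/bin/composer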
I'm trying to build a Docker image including a very particular configuration of OpenCV with CUDA and GPU support.
The build succeeds, and if I run make install from the same stage that built it, it works with no problems.
The problem happens when I try to use a multi stage build, to avoid keeping all the dependencies needed to build OpenCV. Before you continue reading, what follows might actually be an XY problem, if you have a better solution on how to copy OpenCV build artifacts (including Python bindings!) in a Docker multistage build, that is my actual intent.
Now for my attempted solution and the struggle I have:
I run COPY --from=requirements /opencv /opencv and it works; it apparently copies everything to the right path (I checked the filesystem). But when I run make install from the build folder, I get this CMake error:
CMake Error: The source directory "" does not exist.
Specify --help for usage, or press the help button on the CMake GUI.
Makefile:2724: recipe for target 'cmake_check_build_system' failed
make: *** [cmake_check_build_system] Error 1
Again, the same command, from the same folder, but without multistage build, works.
Here is my Dockerfile:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
# Install dependencies
RUN echo "deb http://es.archive.ubuntu.com/ubuntu eoan main universe" | tee -a /etc/apt/sources.list
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake unzip pkg-config libjpeg-dev libpng-dev libtiff-dev libavcodec-dev \
libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev \
gfortran python3-dev libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libxvidcore-dev x264 \
libx264-dev libfaac-dev libmp3lame-dev libtheora-dev libfaac-dev libmp3lame-dev libvorbis-dev \
libjpeg-dev libpng-dev libtiff-dev git python3-pip libtbb-dev libprotobuf-dev protobuf-compiler \
libgoogle-glog-dev libgflags-dev libgphoto2-dev libeigen3-dev libhdf5-dev wget libtbb-dev gcc-8 g++-8 llvm \
python3-venv libgirepository1.0-dev
# Install my project requirements
WORKDIR /venv
RUN python3 -m venv /venv
ENV PATH="/venv/bin:$PATH"
ADD requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
# Build OpenCV
WORKDIR /opencv
RUN wget https://github.com/opencv/opencv/archive/4.4.0.zip && mv 4.4.0.zip opencv.zip && unzip opencv.zip && rm opencv.zip
RUN wget https://github.com/opencv/opencv_contrib/archive/4.4.0.zip && mv 4.4.0.zip opencv_contrib.zip && unzip opencv_contrib.zip && rm opencv_contrib.zip
WORKDIR /opencv/opencv-4.4.0/build
ENV SITE_PACKAGES /venv/lib/python3.7/site-packages
ENV EXTRA_MODULES /opencv/opencv_contrib-4.4.0/modules
ENV CUDA_ARCH 7.5
ADD docker/build_opencv.sh .
RUN ./build_opencv.sh
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential cmake python3-venv
# Install OpenCV
COPY --from=requirements /opencv /opencv
WORKDIR /opencv/opencv-4.4.0/build
RUN make install && ldconfig
# build fails here and the rest is specific to my project so I've omitted it
The build_opencv.sh script has these options:
#!/bin/bash
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_C_COMPILER=/usr/bin/gcc-8 \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D INSTALL_C_EXAMPLES=OFF \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_PYTHON3_INSTALL_PATH=$SITE_PACKAGES \
-D OPENCV_EXTRA_MODULES_PATH=$EXTRA_MODULES \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=$CUDA_ARCH \
-D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 \
-D WITH_GTK_2_X=OFF \
-D BUILD_EXAMPLES=OFF ..
make -j16
You need at least numpy in your requirements.txt file.
In order to reproduce the issue, a minimal setup would have this structure:
- docker
- Dockerfile
- build_opencv.sh
- requirements.txt
Build from the root of the build context using:
docker build -t opencvmultistage:latest -f docker/Dockerfile .
Am I doing something wrong? Maybe CMake has some weird cache that I'm not copying to the new image and makes the build fail?
For the sake of clarity: if I add make install to the build_opencv.sh script it works, but then OpenCV is installed in the build stage and not the runtime stage, which is not what I intend. make install runs in the same directory, and the same files should be present, so I don't really know what's going on.
It is simpler to run cmake, make and make install in the same stage and then copy the install folders. This way you do not need any build tools like cmake or build-essential in the final docker image.
We will use a custom CMAKE_INSTALL_PREFIX so that the OpenCV binaries are installed into one directory and we can copy it straight to the next stage. Using a custom prefix avoids having to copy the CUDA installation or development libraries that are no longer required. Then we will run ldconfig on that directory to link the libraries as usual.
Modify the build script to use a custom CMAKE_INSTALL_PREFIX:
mkdir /prefix
cmake -D CMAKE_BUILD_TYPE=RELEASE \
# all compiler flags...
-D CMAKE_INSTALL_PREFIX=/prefix
Modify the Dockerfile to run make install in stage 1:
# Stage 1: Build
FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 AS requirements
...
ADD build_opencv.sh .
RUN ./build_opencv.sh && make install
and copy the installation in stage 2:
# Stage 2: runtime
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
RUN apt-get update && apt-get -y upgrade
RUN apt-get -y install build-essential python3-venv
# Install OpenCV
COPY --from=requirements /prefix /prefix
COPY --from=requirements /venv /venv
ENV PATH="/venv/bin:$PATH"
RUN ldconfig /prefix
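As an optional sanity check at the end of stage 2, you could try importing the bindings (a sketch; it assumes cv2 was installed into the copied venv via OPENCV_PYTHON3_INSTALL_PATH and that the venv's Python version matches the runtime image):
# hypothetical check, remove it if the Python versions don't line up
RUN python3 -c "import cv2; print(cv2.__version__)"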
When I use curl --head to test my website, it returns the server information.
I followed this tutorial to hide the nginx server header.
But when I run the command yum install nginx-module-security-headers, it returns yum: not found.
I also tried apk add nginx-module-security-headers, and it shows that the package is missing.
I have used nginx:1.17.6-alpine as my base docker image. Does anyone know how to hide the server name from the header on this Alpine image?
I think I have an easier solution here: https://gist.github.com/hermanbanken/96f0ff298c162a522ddbba44cad31081. Big thanks to hermanbanken on Github for sharing this gist.
The idea is to create a multi-stage build, using the nginx alpine image as the base for compiling the module. This turns into the following Dockerfile:
ARG VERSION=alpine
FROM nginx:${VERSION} as builder
ENV MORE_HEADERS_VERSION=0.33
ENV MORE_HEADERS_GITREPO=openresty/headers-more-nginx-module
# Download sources
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
wget "https://github.com/${MORE_HEADERS_GITREPO}/archive/v${MORE_HEADERS_VERSION}.tar.gz" -O extra_module.tar.gz
# For latest build deps, see https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
libedit-dev \
mercurial \
bash \
alpine-sdk \
findutils
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN rm -rf /usr/src/nginx /usr/src/extra_module && mkdir -p /usr/src/nginx /usr/src/extra_module && \
tar -zxC /usr/src/nginx -f nginx.tar.gz && \
tar -xzC /usr/src/extra_module -f extra_module.tar.gz
WORKDIR /usr/src/nginx/nginx-${NGINX_VERSION}
# Reuse same cli arguments as the nginx:alpine image used to build
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
sh -c "./configure --with-compat $CONFARGS --add-dynamic-module=/usr/src/extra_module/*" && make modules
# Production container starts here
FROM nginx:${VERSION}
COPY --from=builder /usr/src/nginx/nginx-${NGINX_VERSION}/objs/*_module.so /etc/nginx/modules/
.... skipped inserting config files and stuff ...
# Validate the config
RUN nginx -t
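The compiled module still has to be loaded and used by nginx. A minimal sketch of the relevant nginx.conf pieces (assuming the module file name produced by the headers-more build above):
load_module modules/ngx_http_headers_more_filter_module.so;
http {
    server_tokens off;            # hide the nginx version
    more_clear_headers Server;    # drop the Server header entirely (headers-more directive)
    # ... rest of your configuration ...
}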
The Alpine repo probably doesn't have the ngx_security_headers module, but the mentioned tutorial also provides the option of using the Headers More module. You should be able to install this module in your alpine distro using the command:
apk add nginx-mod-http-headers-more
Hope it helps.
I found an alternate solution. The reason it showed "binary not compatible" is that I had an nginx pre-installed in the target image, and it was not compatible with the headers-more module I was using. That means I cannot simply install the third-party library from the Alpine package.
So I prepared a clean Alpine OS and followed the GitHub repository to build Nginx from source with the additional module. The build result ends up under the prefix path you specify.
I'm rather new to Docker and I'm trying to make a simple Dockerfile that combines an alpine image with a python one.
This is what the Dockerfile looks like:
FROM alpine
RUN apk update &&\
apk add -q --progress \
bash \
bats \
curl \
figlet \
findutils \
git \
make \
mc \
nodejs \
openssh \
sed \
wget \
vim
ADD ./src/ /home/src/
WORKDIR /home/src/
FROM python:3.7.4-slim
When running:
docker build -t alp-py .
the image builds as normal.
When I run
docker run -it alp-py bash
I can access the bash, but when I cd to /home/ and ls, it shows an empty directory:
root@5fb77bbc81a1:/# cd home
root@5fb77bbc81a1:/home# ls
root@5fb77bbc81a1:/home#
I've already tried changing ADD to COPY and also trying:
COPY . /home/src/
but nothing works.
What am I doing wrong? Am I missing something?
Thanks!
There is no such thing as "combining 2 images". You should see the images as different virtual machines (only for the purpose of understanding the concept - because they are more than that). You cannot combine them.
In your example you can start directly with the python image and install the tools you need on top of it:
FROM python:3.7.4-slim
RUN apt update &&\
apt-get install -y \
bash \
bats \
curl \
figlet \
findutils \
git \
make \
mc \
nodejs \
openssh \
sed \
wget \
vim
ADD ./src/ /home/src/
WORKDIR /home/src/
I didn't test if all the packages are available, so you might want to do a bit of research to get them all in case you get errors.
When you use 2 FROM statements in your Dockerfile you are creating a multi-stage build. That is useful if you want to create a final image that doesn't contain your source code, but only the binaries of your product (the first stage builds the source and the second only copies the binaries from the first one).
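For illustration, a minimal sketch of that pattern (hypothetical paths and a hypothetical npm build script, just to show the shape):
# Stage 1: build stage with all the tooling
FROM node:14 AS builder
WORKDIR /src
COPY ./src/ .
RUN npm install && npm run build   # assumes package.json defines a "build" script
# Stage 2: the final image only gets the build output, not the sources or node_modules
FROM python:3.7.4-slim
COPY --from=builder /src/dist /home/src/dist
WORKDIR /home/src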
docker image inspect <name>
gives me 16GB
and about 20 layers
When I am logged in as root, this:
du -hs /
shows me just 2GB.
FYI, there are already multi-line RUN commands in the Dockerfile.
Can I squash all layers into one layer without touching the Dockerfile, rebuilding, etc.?
Or possibly by adding an extra action to the Dockerfile which clears/improves caching?
The Dockerfile is:
FROM heroku/heroku:18
ENV PYENV_ROOT="/pyenv"
ENV PATH="/pyenv/shims:/pyenv/bin:$PATH"
ENV PYTHON_VERSION 3.5.6
ENV GPG_KEY <value>
ENV PYTHONUNBUFFERED 1
ENV TERM xterm
ENV EDITOR vim
RUN apt-get update && apt-get install -y \
build-essential \
gdal-bin \
binutils \
iputils-ping \
libjpeg8 \
libproj-dev \
libjpeg8-dev \
libtiff-dev \
zlib1g-dev \
libfreetype6-dev \
liblcms2-dev \
libxml2-dev \
libxslt1-dev \
libssl-dev \
libncurses5-dev \
virtualenv \
python-pip \
python3-pip \
python-dev \
libmysqlclient-dev \
mysql-client-5.7 \
libpq-dev \
libcurl4-gnutls-dev \
libgnutls28-dev \
libbz2-dev \
tig \
git \
vim \
nano \
tmux \
tmuxinator \
fish \
sudo \
libnet-ifconfig-wrapper-perl \
ruby \
libssl-dev \
nodejs \
strace \
tcpdump \
# npm & grunt
&& curl -L https://npmjs.com/install.sh | sh \
&& npm install -g grunt-cli grunt \
# ruby & foreman
&& gem install foreman \
# installing pyenv
&& curl https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash
COPY . /app
COPY ./requirements /requirements
COPY ./requirements.txt /requirements.txt
COPY ./docker/docker_compose/django/foreman.sh /foreman.sh
COPY ./docker/docker_compose/django/Procfile /Procfile
COPY ./docker/docker_compose/django/entrypoint.sh /entrypoint.sh
# ADD sudoer user django with password django
RUN groupadd -r django -g 1000 && \
useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid 1000 --gid 1000 -r -g django django && \
usermod -a -G sudo django && \
chown -R django:django /app
COPY --chown=django:django ./docker/docker_compose/django/fish /home/django/.config/fish
COPY --chown=django:django ./docker/docker_compose/django/tmuxinator /home/django/.tmuxinator
COPY ./docker/docker_compose/django/fish /root/.config/fish
WORKDIR /app
RUN sed -i 's/\r//' /entrypoint.sh \
&& sed -i 's/\r//' /foreman.sh \
&& chmod +x /entrypoint.sh \
&& chown django /entrypoint.sh \
&& chmod +x /foreman.sh \
&& chown django /foreman.sh \
&& chown -R django:django /home/django/ \
&& pyenv install ${PYTHON_VERSION%%} \
&& mkdir -p /app/log \
&& pyenv global ${PYTHON_VERSION%%} \
&& pyenv rehash \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -U pip \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements.txt \
&& chown -R django:django /pyenv/ \
&& ${PYENV_ROOT%%}/versions/${PYTHON_VERSION%%}/bin/pip install -r /requirements/dev_requirements.txt
# this user receives ENVs from the top
USER django
ENTRYPOINT ["/entrypoint.sh"]
What I've tried so far:
The --squash option from docker build's experimental mode is not really an option for me. That Dockerfile is one of several Dockerfiles used by docker-compose.
I've also checked this:
https://github.com/jwilder/docker-squash
but it seems docker load cannot load a squashed image.
Also, that squashing only gets me to 8GB (still far away from the expected ~2GB):
docker save <image_id> | docker-squash -t latest_tiny | docker load
Update after answers:
When I added this:
&& apt-get autoremove \ # ? to consider
&& apt-get clean \ # ? to consider
&& rm -rf /var/lib/apt/lists/*
to the apt-get command, and --no-cache-dir to each pip install, the result was 72GB (yes, even much more - docker images shows 36GB before the pip command and 72GB as the final size).
My working directory is clean (regarding COPY). du -hs / (as root) still reports 2GB. And all images were removed before rebuilding.
Following @Mihai's approach, I was able to slim down the image from 16GB to 9GB.
There is a simple trick to get rid of the intermediate layers. It will bring down the size as well, but by how much depends on how the image was built.
Create a Dockerfile like this:
FROM your_image as initial
FROM your_image_base
COPY --from=initial / /
your_image_base should be something like 'alpine' - the smallest image that your image and its parents descend from.
Now build the image and check the history and size:
docker build -t your-image:2.0 .
docker image history your-image:2.0
docker image ls
This way you do create a new Dockerfile (if that is acceptable for your process) without touching the initial Dockerfile.
Let me know if this solves your issue.
UPDATE AFTER SEEING THE Dockerfile:
Maybe I missed it, but I don't see you cleaning up the apt-get cache after you perform the installations. Your big RUN command should end with "&& rm -rf /var/lib/apt/lists/*" on the same line, so that the whole cache is not stored in the layer.
Definitely add && rm -rf /var/lib/apt/lists/* on the end of your main run command, like Mihai said. Another thing that may help (depending on how big your dependencies are) is installing with pip using the --no-cache-dir option . Also, make sure you understand build context and consider using either a .dockerignore or sending the context to another directory (totally depends on how you're directory is setup)
I've also had luck exploring an image using dive. Honestly, this looks like a pretty big image, so I'm not sure how much you're going to be able to get it down.
To squash a Docker image without re-building it or manipulating the original Dockerfile, you can extend from your image and squash it:
docker build --squash -t your_image_squashed - <<< "FROM your_image"
It's very easy, just use
docker commit YOUR_CONTAINER_ID NEW_IMAGE_NAME
Docker will throw away the intermediate layers; you lose the history but the size is smaller.