How to check what Julia packages are installed in `jupyter/datascience-notebook`? - docker

I'm working on a Julia x Jupyter project and I chose the jupyter/datascience-notebook Docker image, which contains JupyterLab and a Julia environment.
I want to know which Julia packages come pre-installed in jupyter/datascience-notebook so I can tell which extra packages I need to install manually.
I read the Dockerfile of datascience-notebook to find out which packages are installed in the image, but I could not find any lines that designate Julia packages.
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG OWNER=jupyter
ARG BASE_CONTAINER=$OWNER/scipy-notebook
FROM $BASE_CONTAINER
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
# Fix DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
USER root
# Julia installation
# Default values can be overridden at build time
# (ARGS are in lower case to distinguish them from ENV)
# Check https://julialang.org/downloads/
ARG julia_version="1.7.1"
# R pre-requisites
RUN apt-get update --yes && \
apt-get install --yes --no-install-recommends \
fonts-dejavu \
gfortran \
gcc && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Julia dependencies
# install Julia packages in /opt/julia instead of ${HOME}
ENV JULIA_DEPOT_PATH=/opt/julia \
JULIA_PKGDIR=/opt/julia \
JULIA_VERSION="${julia_version}"
WORKDIR /tmp
# hadolint ignore=SC2046
RUN set -x && \
julia_arch=$(uname -m) && \
julia_short_arch="${julia_arch}" && \
if [ "${julia_short_arch}" == "x86_64" ]; then \
julia_short_arch="x64"; \
fi; \
julia_installer="julia-${JULIA_VERSION}-linux-${julia_arch}.tar.gz" && \
julia_major_minor=$(echo "${JULIA_VERSION}" | cut -d. -f 1,2) && \
mkdir "/opt/julia-${JULIA_VERSION}" && \
wget -q "https://julialang-s3.julialang.org/bin/linux/${julia_short_arch}/${julia_major_minor}/${julia_installer}" && \
tar xzf "${julia_installer}" -C "/opt/julia-${JULIA_VERSION}" --strip-components=1 && \
rm "${julia_installer}" && \
ln -fs /opt/julia-*/bin/julia /usr/local/bin/julia
# Show Julia where conda libraries are \
RUN mkdir /etc/julia && \
echo "push!(Libdl.DL_LOAD_PATH, \"${CONDA_DIR}/lib\")" >> /etc/julia/juliarc.jl && \
# Create JULIA_PKGDIR \
mkdir "${JULIA_PKGDIR}" && \
chown "${NB_USER}" "${JULIA_PKGDIR}" && \
fix-permissions "${JULIA_PKGDIR}"
USER ${NB_UID}
# R packages including IRKernel which gets installed globally.
# r-e1071: dependency of the caret R package
RUN mamba install --quiet --yes \
'r-base' \
'r-caret' \
'r-crayon' \
'r-devtools' \
'r-e1071' \
'r-forecast' \
'r-hexbin' \
'r-htmltools' \
'r-htmlwidgets' \
'r-irkernel' \
'r-nycflights13' \
'r-randomforest' \
'r-rcurl' \
'r-rodbc' \
'r-rsqlite' \
'r-shiny' \
'rpy2' \
'unixodbc' && \
mamba clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# These packages are not easy to install under arm
RUN set -x && \
arch=$(uname -m) && \
if [ "${arch}" == "x86_64" ]; then \
mamba install --quiet --yes \
'r-rmarkdown' \
'r-tidymodels' \
'r-tidyverse' && \
mamba clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"; \
fi;
# Add Julia packages.
# Install IJulia as jovyan and then move the kernelspec out
# to the system share location. Avoids problems with runtime UID change not
# taking effect properly on the .local folder in the jovyan home dir.
RUN julia -e 'import Pkg; Pkg.update()' && \
julia -e 'import Pkg; Pkg.add("HDF5")' && \
julia -e 'using Pkg; pkg"add IJulia"; pkg"precompile"' && \
# move kernelspec out of home \
mv "${HOME}/.local/share/jupyter/kernels/julia"* "${CONDA_DIR}/share/jupyter/kernels/" && \
chmod -R go+rx "${CONDA_DIR}/share/jupyter" && \
rm -rf "${HOME}/.local" && \
fix-permissions "${JULIA_PKGDIR}" "${CONDA_DIR}/share/jupyter"
WORKDIR "${HOME}"
How can I check which Julia packages are installed in jupyter/datascience-notebook?

In your code, julia -e 'import Pkg; Pkg.add("HDF5")' installs the HDF5 package, and it is the only package installed besides IJulia.
If you want to list the packages installed in your Julia environment you can run:
julia -e 'using Pkg; Pkg.status()'
Looking at your code, this will work correctly as long as you run it with the Julia depot location environment variable set to JULIA_DEPOT_PATH=/opt/julia.
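If the container is already running, the same listing can be obtained from the host; a minimal sketch, where the container name datascience is only an example:
docker exec datascience julia -e 'using Pkg; Pkg.status()'
Since the image sets JULIA_DEPOT_PATH=/opt/julia via ENV, this reports the packages baked into the image rather than anything under the user's home directory.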

Related

Optimizing Dockerfile image size. What more can I do, and how, to reduce the size of this image?

It's my first question, so hello world.
I'm a beginner, unfortunately in both GNU/Linux and dockerizing things.
I have an image whose reason to exist is to be an all-in-one image for bitbucket-pipelines and azure-pipelines (a multi-project image).
During a forced update (I added Groovy and changed the Node.js source due to problems with SSL), the image size went up from 1 GB to 1.5 GB.
With my tweaks I managed to free 150 MB, down to the current 1.35 GB.
My tweaks (the general pattern is sketched after this list):
adding some rm calls
apt-get clean
npm cache clean
merging as many layers as I could
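The pattern behind those tweaks (a generic sketch, not a line from the Dockerfile below; some-package is a placeholder) is to install and clean up inside the same RUN instruction, so the cleanup actually shrinks the resulting layer:
RUN apt-get update \
    && apt-get install -y --no-install-recommends some-package \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*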
CURRENT DOCKERFILE
FROM ubuntu:18.04
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8'
WORKDIR ~/
USER root
ARG USERNAME=root
RUN apt-get update && \
apt-get -y --no-install-recommends install locales \
build-essential \
git \
maven \
ant \
unzip \
python3 \
zip \
wget \
apt-transport-https \
ca-certificates \
curl \
software-properties-common \
&& echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen \
&& locale-gen en_US.UTF-8 \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV JAVA_VERSION jdk-12.0.2+10
RUN set -eux; \
ARCH="$(dpkg --print-architecture)"; \
case "${ARCH}" in \
aarch64|arm64) \
ESUM='855f046afc5a5230ad6da45a5c811194267acd1748f16b648bfe5710703fe8c6'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_aarch64_linux_hotspot_12.0.2_10.tar.gz'; \
;; \
armhf) \
ESUM='9fec85826ffb7b2b2cf2853a6ed3e001b528ed5cf13e435cd13026398b5178d8'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_arm_linux_hotspot_12.0.2_10.tar.gz'; \
;; \
ppc64el|ppc64le) \
ESUM='4b0c9f5cdea1b26d7f079fa6478aceebf1923c947c4209d5709c0869dd71b98f'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_ppc64le_linux_hotspot_12.0.2_10.tar.gz'; \
;; \
s390x) \
ESUM='9897deeaf7a2c90374fbaca8b3eb8e63267d8fc1863b43b21c0bfc86e4783470'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_s390x_linux_hotspot_12.0.2_10.tar.gz'; \
;; \
amd64|x86_64) \
ESUM='1202f536984c28d68681d51207a84b6c76e5998579132d3fe1b8085aa6a5f21e'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_x64_linux_hotspot_12.0.2_10.tar.gz'; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
mkdir -p /opt/java/openjdk; \
cd /opt/java/openjdk; \
tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
rm -rf /tmp/openjdk.tar.gz;
ENV JAVA_HOME="/opt/java/openjdk" \
PATH="/opt/java/openjdk/bin:$PATH" \
ANT_HOME="/usr/share/java/apache-ant" \
PATH="$PATH:$ANT_HOME/bin" \
GROOVY_HOME="/$USERNAME/.sdkman/candidates/groovy/3.0.8" \
PATH="$PATH:/$USERNAME/.sdkman/candidates/groovy/3.0.8/bin"
RUN curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - \
&& apt-get install -f \
&& apt-get install -y nodejs \
&& apt-get autoremove -y \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& curl -s "https://get.sdkman.io" | bash \
&& yes | /bin/bash -l -c "source $HOME/.sdkman/bin/sdkman-init.sh \
&& sdk install groovy \
&& rm -rf $HOME/.sdkman/archives/* \
&& rm -rf $HOME/.sdkman/tmp/*" \
&& npm install -g npm \
&& npm install -g lodash \
&& npm install -g sfdc-generate-package \
&& npm install -g jsforce \
&& npm cache clean --force
#TESTS
CMD echo "print env varaibles: " && printenv \
&& echo "XXX PATHS: " && echo "$PATH \n" \
&& echo "GROOVY_HOME " && echo "$GROOVY_HOME " \
&& echo "HOME " && echo "$HOME " \
&& echo "XXX SOFTWARE VERSIONS:" \
&& echo "nodejs :" && nodejs -v \
&& echo "npm :" && npm -v \
&& git --version \
&& ant -version \
&& python3 --version \
&& java -version \
&& groovy -version
DOCKER HISTORY
IMAGE CREATED BY SIZE
d7f3822f32da /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
7f2f0f621ae8 |1 USERNAME=root /bin/sh -c curl -fsSL https… 413MB
a03ee701466f /bin/sh -c #(nop) ENV JAVA_HOME=/opt/java/o… 0B
8f3a9e1ce43c |1 USERNAME=root /bin/sh -c set -eux; AR… 350MB
ae7b362dfaee /bin/sh -c #(nop) ENV JAVA_VERSION=jdk-12.0… 0B
72fc4ae7e73f |1 USERNAME=root /bin/sh -c apt-get update &… 521MB
There are several ways to make your Docker image smaller.
Looking at your example, two things come to mind:
Try to build the image in more than one stage: use the tools you need for building in an earlier stage, and create the final image by only copying files from the previous stages.
See the Docker documentation on multi-stage builds.
You are starting from an Ubuntu image, which is very large. It is better to use Alpine in the last stage.
In that Ubuntu container you are setting up the whole application (Python, Maven, Java).
This is not the philosophy of Docker. It is better to create an image for every service: a Python container, a Java container, and so on. With this setup, try to stick to standard images.
The moment you need to run apt-get in a container, you should ask yourself where you went wrong and how you can split things up into different containers.
For the different containers to talk to each other, use docker-compose. A minimal multi-stage sketch follows.
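As a rough illustration of the multi-stage idea (image tags and paths are illustrative, not a drop-in replacement for the pipeline image above): the build tools stay in the builder stage, and only the produced artifact is copied into the final stage.
FROM maven:3-jdk-11 AS builder
WORKDIR /build
COPY . .
RUN mvn -q package

FROM eclipse-temurin:11-jre-alpine
COPY --from=builder /build/target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]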

Why does using multiple Dockerfiles produce a smaller image than a multi-stage build?

The repository jupyter/docker-stacks provides multiple Dockerfiles for Jupyter Notebook images. These Dockerfiles build on each other in the following form:
This Dockerfile is jupyter/base-notebook
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
# Ubuntu 20.04 (focal)
# https://hub.docker.com/_/ubuntu/?tab=tags&name=focal
# OS/ARCH: linux/amd64
ARG ROOT_CONTAINER=ubuntu:focal-20210416@sha256:86ac87f73641c920fb42cc9612d4fb57b5626b56ea2a19b894d0673fd5b4f2e9
ARG BASE_CONTAINER=$ROOT_CONTAINER
FROM $BASE_CONTAINER
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
ARG NB_USER="jovyan"
ARG NB_UID="1000"
ARG NB_GID="100"
# Fix DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
USER root
# ---- Miniforge installer ----
# Default values can be overridden at build time
# (ARGS are in lower case to distinguish them from ENV)
# Check https://github.com/conda-forge/miniforge/releases
# Conda version
ARG conda_version="4.10.1"
# Miniforge installer patch version
ARG miniforge_patch_number="0"
# Miniforge installer architecture
ARG miniforge_arch="x86_64"
# Package Manager and Python implementation to use (https://github.com/conda-forge/miniforge)
# - conda only: either Miniforge3 to use Python or Miniforge-pypy3 to use PyPy
# - conda + mamba: either Mambaforge to use Python or Mambaforge-pypy3 to use PyPy
ARG miniforge_python="Mambaforge"
# Miniforge archive to install
ARG miniforge_version="${conda_version}-${miniforge_patch_number}"
# Miniforge installer
ARG miniforge_installer="${miniforge_python}-${miniforge_version}-Linux-${miniforge_arch}.sh"
# Miniforge checksum
ARG miniforge_checksum="d4065b376f81b83cfef0c7316f97bb83337e4ae27eb988828363a578226e3a62"
# Install all OS dependencies for notebook server that starts but lacks all
# features (e.g., download as all possible file formats)
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get -q update && \
apt-get install -yq --no-install-recommends \
tini \
wget \
ca-certificates \
sudo \
locales \
fonts-liberation \
run-one && \
apt-get clean && rm -rf /var/lib/apt/lists/* && \
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
# Configure environment
ENV CONDA_DIR=/opt/conda \
SHELL=/bin/bash \
NB_USER=$NB_USER \
NB_UID=$NB_UID \
NB_GID=$NB_GID \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
ENV PATH=$CONDA_DIR/bin:$PATH \
HOME=/home/$NB_USER \
CONDA_VERSION="${conda_version}" \
MINIFORGE_VERSION="${miniforge_version}"
# Copy a script that we will use to correct permissions after running certain commands
COPY fix-permissions /usr/local/bin/fix-permissions
RUN chmod a+rx /usr/local/bin/fix-permissions
# Enable prompt color in the skeleton .bashrc before creating the default NB_USER
# hadolint ignore=SC2016
RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc && \
# Add call to conda init script see https://stackoverflow.com/a/58081608/4413446
echo 'eval "$(command conda shell.bash hook 2> /dev/null)"' >> /etc/skel/.bashrc
# Create NB_USER with name jovyan user with UID=1000 and in the 'users' group
# and make sure these dirs are writable by the `users` group.
RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
sed -i.bak -e 's/^%admin/#%admin/' /etc/sudoers && \
sed -i.bak -e 's/^%sudo/#%sudo/' /etc/sudoers && \
useradd -l -m -s /bin/bash -N -u $NB_UID $NB_USER && \
mkdir -p $CONDA_DIR && \
chown $NB_USER:$NB_GID $CONDA_DIR && \
chmod g+w /etc/passwd && \
fix-permissions $HOME && \
fix-permissions $CONDA_DIR
USER $NB_UID
ARG PYTHON_VERSION=default
# Setup work directory for backward-compatibility
RUN mkdir "/home/$NB_USER/work" && \
fix-permissions "/home/$NB_USER"
# Install conda as jovyan and check the sha256 sum provided on the download site
WORKDIR /tmp
# Prerequisites installation: conda, mamba, pip, tini
RUN wget --quiet "https://github.com/conda-forge/miniforge/releases/download/${miniforge_version}/${miniforge_installer}" && \
echo "${miniforge_checksum} *${miniforge_installer}" | sha256sum --check && \
/bin/bash "${miniforge_installer}" -f -b -p $CONDA_DIR && \
rm "${miniforge_installer}" && \
# Conda configuration see https://conda.io/projects/conda/en/latest/configuration.html
echo "conda ${CONDA_VERSION}" >> $CONDA_DIR/conda-meta/pinned && \
conda config --system --set auto_update_conda false && \
conda config --system --set show_channel_urls true && \
if [ ! $PYTHON_VERSION = 'default' ]; then conda install --yes python=$PYTHON_VERSION; fi && \
conda list python | grep '^python ' | tr -s ' ' | cut -d '.' -f 1,2 | sed 's/$/.*/' >> $CONDA_DIR/conda-meta/pinned && \
conda install --quiet --yes \
"conda=${CONDA_VERSION}" \
'pip' && \
conda update --all --quiet --yes && \
conda clean --all -f -y && \
rm -rf /home/$NB_USER/.cache/yarn && \
fix-permissions $CONDA_DIR && \
fix-permissions /home/$NB_USER
# Install Jupyter Notebook, Lab, and Hub
# Generate a notebook server config
# Cleanup temporary files
# Correct permissions
# Do all this in a single RUN command to avoid duplicating all of the
# files across image layers when the permissions change
RUN conda install --quiet --yes \
'notebook=6.3.0' \
'jupyterhub=1.4.1' \
'jupyterlab=3.0.15' && \
conda clean --all -f -y && \
npm cache clean --force && \
jupyter notebook --generate-config && \
jupyter lab clean && \
rm -rf /home/$NB_USER/.cache/yarn && \
fix-permissions $CONDA_DIR && \
fix-permissions /home/$NB_USER
EXPOSE 8888
# Configure container startup
ENTRYPOINT ["tini", "-g", "--"]
CMD ["start-notebook.sh"]
# Copy local files as late as possible to avoid cache busting
COPY start.sh start-notebook.sh start-singleuser.sh /usr/local/bin/
# Currently need to have both jupyter_notebook_config and jupyter_server_config to support classic and lab
COPY jupyter_notebook_config.py /etc/jupyter/
# Fix permissions on /etc/jupyter as root
USER root
# Prepare upgrade to JupyterLab V3.0 #1205
RUN sed -re "s/c.NotebookApp/c.ServerApp/g" \
/etc/jupyter/jupyter_notebook_config.py > /etc/jupyter/jupyter_server_config.py && \
fix-permissions /etc/jupyter/
# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
WORKDIR $HOME
This Dockerfile is jupyter/minimal-notebook
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/base-notebook
FROM $BASE_CONTAINER
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
USER root
# Install all OS dependencies for fully functional notebook server
RUN apt-get update && apt-get install -yq --no-install-recommends \
build-essential \
vim-tiny \
git \
inkscape \
libsm6 \
libxext-dev \
libxrender1 \
lmodern \
netcat \
# ---- nbconvert dependencies ----
texlive-xetex \
texlive-fonts-recommended \
texlive-plain-generic \
# ----
tzdata \
unzip \
nano-tiny \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Create alternative for nano -> nano-tiny
RUN update-alternatives --install /usr/bin/nano nano /bin/nano-tiny 10
# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
This Dockerfile is jupyter/scipy-notebook
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/minimal-notebook
FROM $BASE_CONTAINER
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
USER root
# ffmpeg for matplotlib anim & dvipng+cm-super for latex labels
RUN apt-get update && \
apt-get install -y --no-install-recommends ffmpeg dvipng cm-super && \
apt-get clean && rm -rf /var/lib/apt/lists/*
USER $NB_UID
# Install Python 3 packages
RUN conda install --quiet --yes \
'beautifulsoup4=4.9.*' \
'conda-forge::blas=*=openblas' \
'bokeh=2.3.*' \
'bottleneck=1.3.*' \
'cloudpickle=1.6.*' \
'cython=0.29.*' \
'dask=2021.4.*' \
'dill=0.3.*' \
'h5py=3.2.*' \
'ipywidgets=7.6.*' \
'ipympl=0.7.*'\
'matplotlib-base=3.4.*' \
'numba=0.53.*' \
'numexpr=2.7.*' \
'pandas=1.2.*' \
'patsy=0.5.*' \
'protobuf=3.15.*' \
'pytables=3.6.*' \
'scikit-image=0.18.*' \
'scikit-learn=0.24.*' \
'scipy=1.6.*' \
'seaborn=0.11.*' \
'sqlalchemy=1.4.*' \
'statsmodels=0.12.*' \
'sympy=1.8.*' \
'vincent=0.4.*' \
'widgetsnbextension=3.5.*'\
'xlrd=2.0.*' && \
conda clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# Install facets which does not have a pip or conda package at the moment
WORKDIR /tmp
RUN git clone https://github.com/PAIR-code/facets.git && \
jupyter nbextension install facets/facets-dist/ --sys-prefix && \
rm -rf /tmp/facets && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# Import matplotlib the first time to build the font cache.
ENV XDG_CACHE_HOME="/home/${NB_USER}/.cache/"
RUN MPLBACKEND=Agg python -c "import matplotlib.pyplot" && \
fix-permissions "/home/${NB_USER}"
USER $NB_UID
WORKDIR $HOME
Finally, this Dockerfile is jupyter/tensorflow-notebook
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/scipy-notebook
FROM $BASE_CONTAINER
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
# Install Tensorflow
RUN mamba install --quiet --yes \
'tensorflow=2.4.1' && \
conda clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
I built each of these images locally (using BuildKit) with the following command:
docker build --rm --force-rm -t <TAG HERE> <FOLDER NAME HERE>
Here, the final image size of jupyter/tensorflow-notebook is 3.17 GB.
I then combined all the previous Dockerfiles into the following multi-stage build Dockerfile:
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
# Ubuntu 20.04 (focal)
# https://hub.docker.com/_/ubuntu/?tab=tags&name=focal
# OS/ARCH: linux/amd64
ARG ROOT_CONTAINER=ubuntu:focal-20210416@sha256:86ac87f73641c920fb42cc9612d4fb57b5626b56ea2a19b894d0673fd5b4f2e9
ARG BASE_CONTAINER=$ROOT_CONTAINER
FROM $BASE_CONTAINER AS base
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
ARG NB_USER="jovyan"
ARG NB_UID="1000"
ARG NB_GID="100"
# Fix DL4006
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
USER root
# ---- Miniforge installer ----
# Default values can be overridden at build time
# (ARGS are in lower case to distinguish them from ENV)
# Check https://github.com/conda-forge/miniforge/releases
# Conda version
ARG conda_version="4.10.1"
# Miniforge installer patch version
ARG miniforge_patch_number="0"
# Miniforge installer architecture
ARG miniforge_arch="x86_64"
# Package Manager and Python implementation to use (https://github.com/conda-forge/miniforge)
# - conda only: either Miniforge3 to use Python or Miniforge-pypy3 to use PyPy
# - conda + mamba: either Mambaforge to use Python or Mambaforge-pypy3 to use PyPy
ARG miniforge_python="Mambaforge"
# Miniforge archive to install
ARG miniforge_version="${conda_version}-${miniforge_patch_number}"
# Miniforge installer
ARG miniforge_installer="${miniforge_python}-${miniforge_version}-Linux-${miniforge_arch}.sh"
# Miniforge checksum
ARG miniforge_checksum="d4065b376f81b83cfef0c7316f97bb83337e4ae27eb988828363a578226e3a62"
# Install all OS dependencies for notebook server that starts but lacks all
# features (e.g., download as all possible file formats)
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get -q update && \
apt-get install -yq --no-install-recommends \
tini \
wget \
ca-certificates \
sudo \
locales \
fonts-liberation \
run-one && \
apt-get clean && rm -rf /var/lib/apt/lists/* && \
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
# Configure environment
ENV CONDA_DIR=/opt/conda \
SHELL=/bin/bash \
NB_USER=$NB_USER \
NB_UID=$NB_UID \
NB_GID=$NB_GID \
LC_ALL=en_US.UTF-8 \
LANG=en_US.UTF-8 \
LANGUAGE=en_US.UTF-8
ENV PATH=$CONDA_DIR/bin:$PATH \
HOME=/home/$NB_USER \
CONDA_VERSION="${conda_version}" \
MINIFORGE_VERSION="${miniforge_version}"
# Copy a script that we will use to correct permissions after running certain commands
COPY fix-permissions /usr/local/bin/fix-permissions
RUN chmod a+rx /usr/local/bin/fix-permissions
# Enable prompt color in the skeleton .bashrc before creating the default NB_USER
# hadolint ignore=SC2016
RUN sed -i 's/^#force_color_prompt=yes/force_color_prompt=yes/' /etc/skel/.bashrc && \
# Add call to conda init script see https://stackoverflow.com/a/58081608/4413446
echo 'eval "$(command conda shell.bash hook 2> /dev/null)"' >> /etc/skel/.bashrc
# Create NB_USER with name jovyan user with UID=1000 and in the 'users' group
# and make sure these dirs are writable by the `users` group.
RUN echo "auth requisite pam_deny.so" >> /etc/pam.d/su && \
sed -i.bak -e 's/^%admin/#%admin/' /etc/sudoers && \
sed -i.bak -e 's/^%sudo/#%sudo/' /etc/sudoers && \
useradd -l -m -s /bin/bash -N -u $NB_UID $NB_USER && \
mkdir -p $CONDA_DIR && \
chown $NB_USER:$NB_GID $CONDA_DIR && \
chmod g+w /etc/passwd && \
fix-permissions $HOME && \
fix-permissions $CONDA_DIR
USER $NB_UID
ARG PYTHON_VERSION=default
# Setup work directory for backward-compatibility
RUN mkdir "/home/$NB_USER/work" && \
fix-permissions "/home/$NB_USER"
# Install conda as jovyan and check the sha256 sum provided on the download site
WORKDIR /tmp
# Prerequisites installation: conda, mamba, pip, tini
RUN wget --quiet "https://github.com/conda-forge/miniforge/releases/download/${miniforge_version}/${miniforge_installer}" && \
echo "${miniforge_checksum} *${miniforge_installer}" | sha256sum --check && \
/bin/bash "${miniforge_installer}" -f -b -p $CONDA_DIR && \
rm "${miniforge_installer}" && \
# Conda configuration see https://conda.io/projects/conda/en/latest/configuration.html
echo "conda ${CONDA_VERSION}" >> $CONDA_DIR/conda-meta/pinned && \
conda config --system --set auto_update_conda false && \
conda config --system --set show_channel_urls true && \
if [ ! $PYTHON_VERSION = 'default' ]; then conda install --yes python=$PYTHON_VERSION; fi && \
conda list python | grep '^python ' | tr -s ' ' | cut -d '.' -f 1,2 | sed 's/$/.*/' >> $CONDA_DIR/conda-meta/pinned && \
conda install --quiet --yes \
"conda=${CONDA_VERSION}" \
'pip' && \
conda update --all --quiet --yes && \
conda clean --all -f -y && \
rm -rf /home/$NB_USER/.cache/yarn && \
fix-permissions $CONDA_DIR && \
fix-permissions /home/$NB_USER
# Install Jupyter Notebook, Lab, and Hub
# Generate a notebook server config
# Cleanup temporary files
# Correct permissions
# Do all this in a single RUN command to avoid duplicating all of the
# files across image layers when the permissions change
RUN conda install --quiet --yes \
'notebook=6.3.0' \
'jupyterhub=1.4.1' \
'jupyterlab=3.0.15' && \
conda clean --all -f -y && \
npm cache clean --force && \
jupyter notebook --generate-config && \
jupyter lab clean && \
rm -rf /home/$NB_USER/.cache/yarn && \
fix-permissions $CONDA_DIR && \
fix-permissions /home/$NB_USER
EXPOSE 8888
# Configure container startup
ENTRYPOINT ["tini", "-g", "--"]
CMD ["start-notebook.sh"]
# Copy local files as late as possible to avoid cache busting
COPY start.sh start-notebook.sh start-singleuser.sh /usr/local/bin/
# Currently need to have both jupyter_notebook_config and jupyter_server_config to support classic and lab
COPY jupyter_notebook_config.py /etc/jupyter/
# Fix permissions on /etc/jupyter as root
USER root
# Prepare upgrade to JupyterLab V3.0 #1205
RUN sed -re "s/c.NotebookApp/c.ServerApp/g" \
/etc/jupyter/jupyter_notebook_config.py > /etc/jupyter/jupyter_server_config.py && \
fix-permissions /etc/jupyter/
# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
WORKDIR $HOME
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/base-notebook
FROM base AS minimal
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
USER root
# Install all OS dependencies for fully functional notebook server
RUN apt-get update && apt-get install -yq --no-install-recommends \
build-essential \
vim-tiny \
git \
inkscape \
libsm6 \
libxext-dev \
libxrender1 \
lmodern \
netcat \
# ---- nbconvert dependencies ----
texlive-xetex \
texlive-fonts-recommended \
texlive-plain-generic \
# ----
tzdata \
unzip \
nano-tiny \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Create alternative for nano -> nano-tiny
RUN update-alternatives --install /usr/bin/nano nano /bin/nano-tiny 10
# Switch back to jovyan to avoid accidental container runs as root
USER $NB_UID
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/minimal-notebook
FROM minimal AS scipy
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
USER root
# ffmpeg for matplotlib anim & dvipng+cm-super for latex labels
RUN apt-get update && \
apt-get install -y --no-install-recommends ffmpeg dvipng cm-super && \
apt-get clean && rm -rf /var/lib/apt/lists/*
USER $NB_UID
# Install Python 3 packages
RUN conda install --quiet --yes \
'beautifulsoup4=4.9.*' \
'conda-forge::blas=*=openblas' \
'bokeh=2.3.*' \
'bottleneck=1.3.*' \
'cloudpickle=1.6.*' \
'cython=0.29.*' \
'dask=2021.4.*' \
'dill=0.3.*' \
'h5py=3.2.*' \
'ipywidgets=7.6.*' \
'ipympl=0.7.*'\
'matplotlib-base=3.4.*' \
'numba=0.53.*' \
'numexpr=2.7.*' \
'pandas=1.2.*' \
'patsy=0.5.*' \
'protobuf=3.15.*' \
'pytables=3.6.*' \
'scikit-image=0.18.*' \
'scikit-learn=0.24.*' \
'scipy=1.6.*' \
'seaborn=0.11.*' \
'sqlalchemy=1.4.*' \
'statsmodels=0.12.*' \
'sympy=1.8.*' \
'vincent=0.4.*' \
'widgetsnbextension=3.5.*'\
'xlrd=2.0.*' && \
conda clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# Install facets which does not have a pip or conda package at the moment
WORKDIR /tmp
RUN git clone https://github.com/PAIR-code/facets.git && \
jupyter nbextension install facets/facets-dist/ --sys-prefix && \
rm -rf /tmp/facets && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
# Import matplotlib the first time to build the font cache.
ENV XDG_CACHE_HOME="/home/${NB_USER}/.cache/"
RUN MPLBACKEND=Agg python -c "import matplotlib.pyplot" && \
fix-permissions "/home/${NB_USER}"
USER $NB_UID
WORKDIR $HOME
# Copyright (c) Jupyter Development Team.
# Distributed under the terms of the Modified BSD License.
ARG BASE_CONTAINER=jupyter/scipy-notebook
FROM scipy AS tensorflow
LABEL maintainer="Jupyter Project <jupyter@googlegroups.com>"
# Install Tensorflow
RUN mamba install --quiet --yes \
'tensorflow=2.4.1' && \
conda clean --all -f -y && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
The size of this image is 14.61 GB, roughly 11 GB larger than the image built from the split Dockerfiles.
What's the reason behind this drastic increase in size?
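One way to see where the extra gigabytes go (a diagnostic sketch, not an answer; the tag names are placeholders for whatever tags were used at build time) is to compare the per-layer sizes of the two final images:
docker history --no-trunc local/tensorflow-split
docker history --no-trunc local/tensorflow-multistage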

How to grant a container privilege to run in a normal user's session in a Dockerfile

After building a Docker image, I run it like docker run -ti firefox-test:latest bash, then go to /usr/bin/ to open Firefox:
root@...: cd /usr/bin
root@...: /usr/bin/firefox
It then prompted me with this message:
Running Firefox as root in a regular user's session is not supported. ($HOME is /home/jenkins which is owned by jenkins.)
So it looks like running it under root is not supported, so I decided to add the --user <uid>:<gid> param to the docker run command; with a user uid and gid provided it can be launched. Without that param it runs in the root session and the message above is displayed.
So I wonder if there is anything I can do when building the Dockerfile so that Firefox can be launched either as root or as a given user uid and gid, so that I don't need to pass the --user <uid>:<gid> param to my docker run command. The docker run workaround is sketched below, followed by my Dockerfile.
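A rough sketch of that workaround (the uid and gid just have to match an existing user; 10000:10000 matches the jenkins user created in the Dockerfile below):
docker run -ti --user 10000:10000 firefox-test:latest bash
My Dockerfile: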
FROM debian:stretch
#==========
# Ruby 2.5.1
#==========
# cf. Dockerfile for ruby:2.3
# https://github.com/docker-library/ruby/blob/master/2.3/Dockerfile
# Install Ruby Dependencies
# cf. https://gorails.com/setup/ubuntu/16.04
RUN apt-get update \
&& rubyBuildDeps=' \
build-essential \
curl \
git-core \
libcurl4-openssl-dev \
libffi-dev \
libreadline-dev \
libsqlite3-dev \
libssl-dev \
libxml2-dev \
libxslt1-dev \
libyaml-dev \
software-properties-common \
sqlite3 \
wget \
zlib1g-dev \
' \
&& apt-get install -y --no-install-recommends $rubyBuildDeps \
&& rm -rf /var/lib/apt/lists/*
# skip installing gem documentation
RUN mkdir -p /usr/local/etc \
&& { \
echo 'install: --no-document'; \
echo 'update: --no-document'; \
} >> /usr/local/etc/gemrc
ENV RUBY_MAJOR 2.5
ENV RUBY_VERSION 2.5.1
ENV RUBY_DOWNLOAD_SHA256 dac81822325b79c3ba9532b048c2123357d3310b2b40024202f360251d9829b1
ENV RUBYGEMS_VERSION 2.7.7
# some of ruby's build scripts are written in ruby
# we purge system ruby later to make sure our final image uses what we just built
RUN set -ex \
\
&& buildDeps=' \
bison \
libgdbm-dev \
ruby \
' \
&& apt-get update \
&& apt-get install -y --no-install-recommends $buildDeps \
&& rm -rf /var/lib/apt/lists/* \
\
&& wget -O ruby.tar.gz "https://cache.ruby-lang.org/pub/ruby/${RUBY_MAJOR%-rc}/ruby-$RUBY_VERSION.tar.gz" \
&& echo "$RUBY_DOWNLOAD_SHA256 *ruby.tar.gz" | sha256sum -c - \
\
&& mkdir -p /usr/src/ruby \
&& tar -xzf ruby.tar.gz -C /usr/src/ruby --strip-components=1 \
&& rm ruby.tar.gz \
\
&& cd /usr/src/ruby \
\
# hack in "ENABLE_PATH_CHECK" disabling to suppress:
# warning: Insecure world writable dir
&& { \
echo '#define ENABLE_PATH_CHECK 0'; \
echo; \
cat file.c; \
} > file.c.new \
&& mv file.c.new file.c \
\
&& ./configure --disable-install-doc \
&& make -j"$(nproc)" \
&& make install \
\
&& apt-get purge -y --auto-remove $buildDeps \
&& cd / \
&& rm -r /usr/src/ruby \
\
&& gem update --system "$RUBYGEMS_VERSION"
ENV BUNDLER_VERSION 2.0.1
RUN gem install bundler --version "$BUNDLER_VERSION" --force
# install things globally, for great justice
# and don't create ".bundle" in all our apps
ENV GEM_HOME /usr/local/bundle
ENV BUNDLE_PATH="$GEM_HOME" \
BUNDLE_BIN="$GEM_HOME/bin" \
BUNDLE_SILENCE_ROOT_WARNING=1 \
BUNDLE_APP_CONFIG="$GEM_HOME"
ENV PATH $GEM_HOME/bin:$BUNDLE_PATH/gems/bin:$PATH
RUN mkdir -p "$GEM_HOME" "$BUNDLE_BIN" \
&& chmod 777 "$GEM_HOME" "$BUNDLE_BIN"
#===============
# Gecko Driver
#===============
ENV GECKODRIVER_VERSION 0.24.0
RUN apt-get update && \
apt-get install -y --no-install-recommends unzip && \
apt-get install -y --no-install-recommends \
bzip2 \
libgconf-2-4 \
libglib2.0-dev \
libnss3-dev \
libxi6 \
xvfb \
&& \
rm -rf /var/lib/apt/lists/* \
/var/cache/apt/* && \
wget https://github.com/mozilla/geckodriver/releases/download/v$GECKODRIVER_VERSION/geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz && \
tar -zxvf geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz && \
mv geckodriver /usr/local/bin/ && \
chmod +x /usr/local/bin/geckodriver && \
rm geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz && \
apt-get purge -y --auto-remove bzip2
#==========
# Firefox
#==========
ENV FF_LANG="en-US" \
FF_BASE_URL="https://archive.mozilla.org/pub" \
FF_PLATFORM="linux-x86_64" \
FF_INNER_PATH="firefox/releases" \
FF_VERSION="67.0"
ENV FF_COMP="firefox-${FF_VERSION}.tar.bz2"
ENV FF_URL="${FF_BASE_URL}/${FF_INNER_PATH}/${FF_VERSION}/${FF_PLATFORM}/${FF_LANG}/${FF_COMP}"
RUN apt-get update && \
apt-get install -y --no-install-recommends unzip && \
apt-get install -y --no-install-recommends \
bzip2 \
&& \
rm -rf /var/lib/apt/lists/* /var/cache/apt/* && \
wget "${FF_URL}" \
-O /tmp/firefox-linux.tar.bz2 && \
tar -xvf /tmp/firefox-linux.tar.bz2 -C /opt && \
ln -s /opt/firefox/firefox /usr/bin/firefox && \
chmod +x /usr/bin/firefox && \
rm /tmp/firefox-linux.tar.bz2 && \
apt-get purge -y --auto-remove bzip2
#---------------------------
# Dependencies for headless
#---------------------------
RUN apt-get update && \
headlessDeps=' \
imagemagick \
xvfb \
' && \
apt-get install -y --no-install-recommends $headlessDeps && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/*
#========================
# Font for Chinese
#========================
RUN apt-get update && \
apt-get install -y --no-install-recommends fonts-arphic-ukai fonts-arphic-uming && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/*
#==============================================
# On advice from:
# https://github.com/SeleniumHQ/docker-selenium/issues/87
ENV DBUS_SESSION_BUS_ADDRESS /dev/null
#==============================================
# Jenkins Agent
#==============================================
# For the Amazon EC2 Container Service Plugin,
# we need the Docker image with an entryPoint which behaves as a
# Jenkins Agent.
# Install OpenJDK-11
# cf. https://xmoexdev.com/wordpress/installing-openjdk-9-debian-stretch/
RUN echo deb http://http.debian.net/debian stretch-backports main >> /etc/apt/sources.list.d/stretch-backports.list && \
apt-get update && \
apt-get install -y --no-install-recommends -t stretch-backports openjdk-8-jdk && \
rm -rf /var/lib/apt/lists/* /var/cache/apt/*
# Adapted from Docker Image: jenkins/slave
# https://hub.docker.com/r/jenkins/slave/~/dockerfile/
ARG user=jenkins
ARG group=jenkins
ARG uid=10000
ARG gid=10000
ENV HOME /home/${user}
RUN groupadd -g ${gid} ${group}
RUN useradd -c "Jenkins user" -d $HOME -u ${uid} -g ${gid} -m ${user}
ARG JENKINS_REMOTING_VERSION=3.20
ARG AGENT_WORKDIR=/home/${user}/agent
RUN curl --create-dirs -sSLo /usr/share/jenkins/slave.jar \
https://repo.jenkins-ci.org/public/org/jenkins-ci/main/remoting/${JENKINS_REMOTING_VERSION}/remoting-${JENKINS_REMOTING_VERSION}.jar \
&& chmod 755 /usr/share/jenkins \
&& chmod 644 /usr/share/jenkins/slave.jar
# USER ${user}
ENV AGENT_WORKDIR=${AGENT_WORKDIR}
RUN mkdir /home/${user}/.jenkins && mkdir -p ${AGENT_WORKDIR}
VOLUME /home/${user}/.jenkins
VOLUME ${AGENT_WORKDIR}
# WORKDIR /home/${user}
# Docker Image: jenkins/jnlp-slave
# https://hub.docker.com/r/jenkinsci/jnlp-slave/~/dockerfile/
# uses the script from its git repository:
# https://github.com/jenkinsci/docker-jnlp-slave
# Docker builder can't copy files under 'config' directory,
# so keep jenkins-slave in top-level dir
COPY jenkins-slave /usr/local/bin/jenkins-slave
ENTRYPOINT ["jenkins-slave"]

PHP 7.1 Docker build will not compile with pgsql

I am using the polinux/httpd:centos repo to run Apache and PHP 7.1. The build seems to go ok. There are a few warnings related to keys, but none related to php or pgsql. The build completes successfully, but when I ssh into the container the module is not listed (php -m) and there's no extension config file in php.d.
I verified it is listed in the Dockerfile multiple times.
I can install php71-php-pgsql manually after starting the container, but then I can't restart Apache without restarting the container.
I've tried moving yum install php71-php-pgsql to the end of the Dockerfile as a separate RUN command (in addition to the original), but it reports it has already been installed, yet when I ssh into the container it's not listed in the modules and there is no config, as mentioned above.
When I rebuild a container I stop and remove it, then run build with the no-cache option.
I'm stumped...
The Dockerfile is quite long, but I can post if that would be helpful.
Thanks.
UPDATE: Dockerfile per request...
FROM polinux/httpd:centos
ENV \
NVM_DIR="/usr/local/nvm" \
NODE_VERSION="9.2.0" \
GIT_VERSION="2.15.0" \
PHP_VERSION="71"
ADD mariadb.repo /etc/yum.repos.d/mariadb.repo
RUN \
rpm --rebuilddb && yum clean all && rm -rf /var/cache/yum && \
yum update -y && \
yum install -y \
wget \
patch \
bzip2 \
unzip \
make \
openssh-clients \
git \
MariaDB-client && \
rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm && \
yum install -y \
php${PHP_VERSION}-php \
php${PHP_VERSION}-php-bcmath \
php${PHP_VERSION}-php-cli \
php${PHP_VERSION}-php-common \
php${PHP_VERSION}-php-devel \
php${PHP_VERSION}-php-fpm \
php${PHP_VERSION}-php-gd \
php${PHP_VERSION}-php-gmp \
php${PHP_VERSION}-php-intl \
php${PHP_VERSION}-php-json \
php${PHP_VERSION}-php-mbstring \
php${PHP_VERSION}-php-mcrypt \
php${PHP_VERSION}-php-mysqlnd \
php${PHP_VERSION}-php-pgsql \
php${PHP_VERSION}-php-opcache \
php${PHP_VERSION}-php-pdo \
php${PHP_VERSION}-php-pear \
php${PHP_VERSION}-php-process \
php${PHP_VERSION}-php-pspell \
php${PHP_VERSION}-php-xml \
php${PHP_VERSION}-php-pecl-imagick \
php${PHP_VERSION}-php-pecl-mysql \
php${PHP_VERSION}-php-pecl-uploadprogress \
php${PHP_VERSION}-php-pecl-uuid \
php${PHP_VERSION}-php-pecl-memcache \
php${PHP_VERSION}-php-pecl-memcached \
php${PHP_VERSION}-php-pecl-redis \
php${PHP_VERSION}-php-pecl-zip && \
ln -sfF /opt/remi/php${PHP_VERSION}/enable /etc/profile.d/php${PHP_VERSION}-paths.sh && \
ln -sfF /opt/remi/php${PHP_VERSION}/root/usr/bin/{pear,pecl,phar,php,php-cgi,php-config,phpize} /usr/local/bin/. && \
mv -f /etc/opt/remi/php${PHP_VERSION}/php.ini /etc/php.ini && ln -s /etc/php.ini /etc/opt/remi/php${PHP_VERSION}/php.ini && \
rm -rf /etc/php.d && mv /etc/opt/remi/php${PHP_VERSION}/php.d /etc/. && ln -s /etc/php.d /etc/opt/remi/php${PHP_VERSION}/php.d && \
yum install -y \
ImageMagick \
GraphicsMagick \
gcc \
gcc-c++ \
libffi-devel \
libpng-devel \
zlib-devel && \
yum install -y ruby ruby-devel && \
echo 'gem: --no-document' > /etc/gemrc && \
gem update --system && \
gem install bundler && \
export PROFILE=/etc/profile.d/nvm.sh && touch $PROFILE && \
curl -sSL https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash && \
source $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default && \
npm install -g \
gulp \
grunt-cli \
bower \
browser-sync && \
echo -e "StrictHostKeyChecking no" >> /etc/ssh/ssh_config && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
chown apache /usr/local/bin/composer && composer --version && \
yum clean all && rm -rf /tmp/yum* && \
sed -i 's|SetHandler application/x-httpd-php|SetHandler "proxy:fcgi://127.0.0.1:9000"|g' /etc/httpd/conf.d/php${PHP_VERSION}-php.conf
ADD container-files /
ENV \
NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
RUN \
mkdir -p /data/tmp/php && \
chmod -R 777 /data/tmp
# Weird issue: For some reason pgsql is not installed above. May be OoO...
# Manually installing worked, so adding here at the end.
RUN \
yum install php71-php-pgsql -y
Chalk this up to inexperience...
This turned out to be a problem with how I was referencing images, first when building and then when running the instance. I'm still not sure I fully understand it, but I think I was running the base image instead of my modified build.
For others who may run into similar problems, the two commands that helped me get this working are:
docker build --rm -t local/httpd-php71 .
... and then ...
docker run \
-d \
--name httpd-php71 \
--restart unless-stopped \
--net dockersubnet \
--volume /www:/var/www \
local/httpd-php71
Here 'local/httpd-php71' is my local custom build. Before, I was not using any tag in the build command, and I was then referencing 'polinux/httpd:centos', the base image, in the run command.
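Once the correctly tagged image is running, a quick check (assuming the container name from the run command above) confirms the extension actually made it in:
docker exec httpd-php71 php -m | grep -i pgsql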
Thanks.

How do I clean up / minimize the storage usage of my container from building OpenCV in a Dockerfile?

Given my Dockerfile:
FROM nvidia/cuda:8.0-cudnn6-devel-ubuntu14.04
RUN apt-get update
# disable interactive functions
ENV DEBIAN_FRONTEND noninteractive
#################Install MiniConda and other dependencies##########
ENV CONDA_DIR /opt/conda
ENV PATH $CONDA_DIR/bin:$PATH
ENV OPENBLAS_NUM_THREADS $(nproc)
RUN mkdir -p $CONDA_DIR && \
echo export PATH=$CONDA_DIR/bin:'$PATH' > /etc/profile.d/conda.sh && \
apt-get update -y && \
apt-get install -y \
wget \
git \
g++ \
graphviz \
software-properties-common \
python-software-properties \
python3-dev \
libhdf5-dev \
libopenblas-dev \
liblapack-dev \
libblas-dev \
libtbb2 \
libtbb-dev && \
apt-get -qq install -y \
build-essential \
unzip \
cmake \
pkg-config \
libjpeg8-dev \
libtiff4-dev \
libjasper-dev \
libpng12-dev \
libgtk2.0-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev \
libv4l-dev \
libatlas-base-dev \
gfortran && \
rm -rf /var/lib/apt/lists/* && \
wget --quiet https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
/bin/bash /Miniconda3-latest-Linux-x86_64.sh -f -b -p $CONDA_DIR && \
rm Miniconda3-latest-Linux-x86_64.sh
#########################MPI###########################
RUN cd /tmp && \
wget "https://www.open-mpi.org/software/ompi/v2.1/downloads/openmpi-2.1.1.tar.gz" && \
tar xzf openmpi-2.1.1.tar.gz && \
cd openmpi-2.1.1 && \
./configure --with-cuda && make -j"$(nproc)" install # && ldconfig
#######################NCCL###########################
ENV CPATH /usr/local/cuda/include:/usr/local/include:$CPATH
RUN cd /usr/local && git clone https://github.com/NVIDIA/nccl.git && cd nccl && \
sed -i '/NVCC_GENCODE ?=/a \ -gencode=arch=compute_30,code=sm_30 \\' Makefile && \
make CUDA_HOME=/usr/local/cuda -j"$(nproc)" && \
make install && ldconfig
######### Compile for devices with cuda compute compatibility 3 (e.g. GRID K520 on aws) second line above
# UNCOMMENT line below to compile for GPUs with cuda compute compatibility 3.0
# sed -i '/NVCC_GENCODE ?=/a \ -gencode=arch=compute_30,code=sm_30 \\' Makefile && \
##########
###################Setup User##########################
ENV NB_USER chainer
ENV NB_UID 1000
RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER -g sudo -p $(perl -e'print crypt("chainer", "aa")') && \
mkdir -p $CONDA_DIR && \
chown chainer $CONDA_DIR -R && \
mkdir -p /src && \
chown -R chainer /src
# mkdir -p /opencv && \
# chown -R chainer /opencv && \
# mkdir -p /opencv_contrib && \
# chown -R chainer /opencv_contrib
####################Python 3#########################
ARG python_version=3.5.2
RUN conda install -y python=${python_version} && \
pip install -U pip && \
conda install Pillow scikit-learn notebook pandas matplotlib mkl nose pyyaml six h5py && \
pip install numpy && \
pip install chainer && \
pip install chainercv && \
# pip install opencv-python && \
pip install pyyaml && \
conda clean -yt
# pip install -U pip && \
#
# conda install Pillow scikit-learn notebook pandas matplotlib mkl nose pyyaml six h5py && \
#
#
# pip install mpi4py && \
# pip install cython && \
#
# pip install chainer && \
# pip install chainercv && \
# pip install chainermn && \
ENV PYTHONPATH $CONDA_DIR/lib/python3.5/site-packages/:$PYTHONPATH
ENV PYTHONPATH /src/:$PYTHONPATH
WORKDIR /src
##OPENCV##
RUN wget https://github.com/Itseez/opencv/archive/3.3.0.zip -O opencv3.zip && \
unzip -q opencv3.zip && mv /src/opencv-3.3.0 /opencv && \
wget https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip -O opencv_contrib3.zip && \
unzip -q opencv_contrib3.zip && mv /src/opencv_contrib-3.3.0 /opencv_contrib
RUN rm -r -f /opencv/build && mkdir /opencv/build && \
cd /opencv/build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D BUILD_PYTHON_SUPPORT=ON \
-D CMAKE_INSTALL_PREFIX=$CONDA_DIR \
-D PYTHON3_LIBRARY=$CONDA_DIR/lib/python3.5 \
-D PYTHON3_INCLUDE_DIRS=$CONDA_DIR/include/python3.5m \
-D PYTHON3_EXECUTABLE=$CONDA_DIR/bin/python3 \
-D PYTHON3_PACKAGES_PATH=$CONDA_DIR/lib/python3.5/site-packages \
-D WITH_TBB=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib/modules \
-D BUILD_EXAMPLES=OFF \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D WITH_IPP=OFF \
-D WITH_V4L=ON .. && \
make -j"$(nproc)" && \
make install && \
ldconfig
######################################################
USER chainer
It ends up at a whopping 15 GB in size.
What can I do to reduce the size in terms of all the self-built stuff happening? Can some of it be deleted again?
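One pattern that usually helps (a simplified sketch assuming the source and build trees are not needed at runtime; the contrib modules and most cmake flags from the original are omitted) is to download, build, install, and delete the sources within a single RUN instruction, so nothing survives into a layer:
RUN wget -q https://github.com/opencv/opencv/archive/3.3.0.zip -O opencv3.zip && \
    unzip -q opencv3.zip && \
    mkdir opencv-3.3.0/build && cd opencv-3.3.0/build && \
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$CONDA_DIR .. && \
    make -j"$(nproc)" && \
    make install && ldconfig && \
    cd /src && rm -rf opencv3.zip opencv-3.3.0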
