A Dockerfile with 2 ENTRYPOINT - docker

I am learning about Docker, specifically how to write a Dockerfile. Recently I came across this one and couldn't understand why there are 2 ENTRYPOINT instructions in it.
The original file is at this link: CosmWasm/rust-optimizer Dockerfile. The code below is its current content.
FROM rust:1.60.0-alpine as targetarch
ARG BUILDPLATFORM
ARG TARGETPLATFORM
ARG TARGETARCH
ARG BINARYEN_VERSION="version_105"
RUN echo "Running on $BUILDPLATFORM, building for $TARGETPLATFORM"
# AMD64
FROM targetarch as builder-amd64
ARG ARCH="x86_64"
# ARM64
FROM targetarch as builder-arm64
ARG ARCH="aarch64"
# GENERIC
FROM builder-${TARGETARCH} as builder
# Download binaryen sources
ADD https://github.com/WebAssembly/binaryen/archive/refs/tags/$BINARYEN_VERSION.tar.gz /tmp/binaryen.tar.gz
# Extract and compile wasm-opt
# Adapted from https://github.com/WebAssembly/binaryen/blob/main/.github/workflows/build_release.yml
RUN apk update && apk add build-base cmake git python3 clang ninja
RUN tar -xf /tmp/binaryen.tar.gz
RUN cd binaryen-version_*/ && cmake . -G Ninja -DCMAKE_CXX_FLAGS="-static" -DCMAKE_C_FLAGS="-static" -DCMAKE_BUILD_TYPE=Release -DBUILD_STATIC_LIB=ON && ninja wasm-opt
# Run tests
RUN cd binaryen-version_*/ && ninja wasm-as wasm-dis
RUN cd binaryen-version_*/ && python3 check.py wasm-opt
# Install wasm-opt
RUN strip binaryen-version_*/bin/wasm-opt
RUN mv binaryen-version_*/bin/wasm-opt /usr/local/bin
# Check cargo version
RUN cargo --version
# Check wasm-opt version
RUN wasm-opt --version
# Download sccache and verify checksum
ADD https://github.com/mozilla/sccache/releases/download/v0.2.15/sccache-v0.2.15-$ARCH-unknown-linux-musl.tar.gz /tmp/sccache.tar.gz
RUN sha256sum /tmp/sccache.tar.gz | egrep '(e5d03a9aa3b9fac7e490391bbe22d4f42c840d31ef9eaf127a03101930cbb7ca|90d91d21a767e3f558196dbd52395f6475c08de5c4951a4c8049575fa6894489)'
# Extract and install sccache
RUN tar -xf /tmp/sccache.tar.gz
RUN mv sccache-v*/sccache /usr/local/bin/sccache
RUN chmod +x /usr/local/bin/sccache
# Check sccache version
RUN sccache --version
# Add scripts
ADD optimize.sh /usr/local/bin/optimize.sh
RUN chmod +x /usr/local/bin/optimize.sh
ADD optimize_workspace.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/optimize_workspace.sh
# Being required for gcc linking of build_workspace
RUN apk add --no-cache musl-dev
ADD build_workspace build_workspace
RUN cd build_workspace && \
echo "Installed targets:" && (rustup target list | grep installed) && \
export DEFAULT_TARGET="$(rustc -vV | grep 'host:' | cut -d' ' -f2)" && echo "Default target: $DEFAULT_TARGET" && \
# Those RUSTFLAGS reduce binary size from 4MB to 600 KB
RUSTFLAGS='-C link-arg=-s' cargo build --release && \
ls -lh target/release/build_workspace && \
(ldd target/release/build_workspace || true) && \
mv target/release/build_workspace /usr/local/bin
#
# base-optimizer
#
FROM rust:1.60.0-alpine as base-optimizer
# Being required for gcc linking
RUN apk update && \
apk add --no-cache musl-dev
# Setup Rust with Wasm support
RUN rustup target add wasm32-unknown-unknown
# Add wasm-opt
COPY --from=builder /usr/local/bin/wasm-opt /usr/local/bin
#
# rust-optimizer
#
FROM base-optimizer as rust-optimizer
# Use sccache. Users can override this variable to disable caching.
COPY --from=builder /usr/local/bin/sccache /usr/local/bin
ENV RUSTC_WRAPPER=sccache
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize.sh /usr/local/bin
ENTRYPOINT ["optimize.sh"]
# Default argument when none is provided
CMD ["."]
#
# workspace-optimizer
#
FROM base-optimizer as workspace-optimizer
# Assume we mount the source code in /code
WORKDIR /code
# Add script as entry point
COPY --from=builder /usr/local/bin/optimize_workspace.sh /usr/local/bin
COPY --from=builder /usr/local/bin/build_workspace /usr/local/bin
ENTRYPOINT ["optimize_workspace.sh"]
# Default argument when none is provided
CMD ["."]
According to this document, only the last ENTRYPOINT takes effect. But these two are in two different stages, so is there any case in which both ENTRYPOINTs take effect, or is this just a bug?

You can keep replacing the entrypoint down the file; however, this is a multi-stage Dockerfile, so if you build a given stage you get a different entrypoint.
For example:
docker build --target rust-optimizer .
will build up to and including that stage, and the resulting image runs optimize.sh when started.
However,
docker build .
builds the whole file, so the resulting image runs optimize_workspace.sh when started.
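To make that concrete, a minimal sketch of building and running each target (the image tags are made up; mounting the source at /code follows the WORKDIR /code assumption in the Dockerfile):
docker build --target rust-optimizer -t my/rust-optimizer .
docker run --rm -v "$(pwd)":/code my/rust-optimizer
# the container effectively runs: optimize.sh .
docker build -t my/workspace-optimizer .
docker run --rm -v "$(pwd)":/code my/workspace-optimizer
# the container effectively runs: optimize_workspace.sh .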

Related

Error while importing torch inside a Docker image inside a VM

What I have:
I have set up an Ubuntu VM using Vagrant. Inside this VM, I want to build a Docker image, which should run some services that will be connected to some clients outside the VM. This structure is fixed and cannot be changed. One of the Docker images uses ML frameworks, namely tensorflow and pytorch. The source code to be executed inside the Docker image is bundled using PyInstaller. The building and bundling work perfectly.
But, if I try to run the built Docker image, I get the following error message:
[1] WARNING: file already exists but should not: /tmp/_MEIl2gg3t/torch/_C.cpython-37m-x86_64-linux-gnu.so
[1] WARNING: file already exists but should not: /tmp/_MEIl2gg3t/torch/_dl.cpython-37m-x86_64-linux-gnu.so
['/tmp/_MEIl2gg3t/base_library.zip', '/tmp/_MEIl2gg3t/lib-dynload', '/tmp/_MEIl2gg3t']
[8] Failed to execute script '__main__' due to unhandled exception!
Traceback (most recent call last):
File "__main__.py", line 4, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "app.py", line 6, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "controller.py", line 3, in <module>
File "PyInstaller/loader/pyimod03_importers.py", line 495, in exec_module
File "torch/__init__.py", line 199, in <module>
ImportError: librt.so.1: cannot open shared object file: No such file or directory
Dockerfile
ARG PRJ=unspecified
ARG PYINSTALLER_ARGS=
ARG LD_LIBRARY_PATH_EXTENSION=
ARG PYTHON_VERSION=3.7
###############################################################################
# Stage 1: BUILD PyInstaller
###############################################################################
# Alpine:
#FROM ... as build-pyinstaller
# Ubuntu:
FROM ubuntu:18.04 as build-pyinstaller
ARG PYTHON_VERSION
# Ubuntu:
RUN apt-get update && apt-get install -y \
python$PYTHON_VERSION \
python$PYTHON_VERSION-dev \
python3-pip \
unzip \
# Ubuntu+Alpine:
libc-dev \
g++ \
git
# Make our Python version the default
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python$PYTHON_VERSION 1 && python3 --version
# Alpine:
#
# # Install pycrypto so --key can be used with PyInstaller
# RUN pip install \
# pycrypto
# Install PyInstaller
RUN python3 -m pip install --proxy=${https_proxy} --no-cache-dir \
pyinstaller
###############################################################################
# Stage 2: BUILD our service with Python and pyinstaller
###############################################################################
FROM build-pyinstaller
# Upgrade pip and setuptools
RUN python3 -m pip install --no-cache-dir --upgrade \
pip \
setuptools
# Install pika and protobuf here as they will be required by all our services,
# and installing in every image would take more time.
# If they should no longer be required everywhere, we could instead create
# with-pika and with-protobuf images and copy the required, installed libraries
# to the final build image (similar to how it is done in cpp).
RUN python3 -m pip install --no-cache-dir \
pika \
protobuf
# Add "worker" user to avoid running as root (used in the "run" image below)
# Alpine:
#RUN adduser -D -g "" worker
# Ubuntu:
RUN adduser --disabled-login --gecos "" worker
RUN mkdir -p /opt/export/home/worker && chown -R worker /opt/export/home/worker
ENV HOME /home/worker
# Copy /etc/passwd and /etc/group to the export directory so that they will be installed in the final run image
# (this makes the "worker" user available there; adduser is not available in "FROM scratch").
RUN export-install \
/etc/passwd \
/etc/group
# Create tmp directory that may be required in the runner image
RUN mkdir /opt/export/install/tmp && chmod ogu+rw /opt/export/install/tmp
# When using this build-parent ("FROM ..."), the following ONBUILD commands are executed.
# Files from pre-defined places in the local project directory are copied to the image (see below for details).
# Use the PRJ and MAIN_MODULE arguments that have to be set in the individual builder image that uses this image in FROM ...
ONBUILD ARG PRJ
ONBUILD ENV PRJ=embedded.adas.emergencybreaking
ONBUILD WORKDIR /opt/prj/embedded.adas.emergencybreaking/
# "prj" must contain all files that are required for building the Python app.
# This typically contains a requirements.txt - in this step we only copy requirements.txt
# so that "pip install" is not run after every source file change.
ONBUILD COPY pr[j]/requirements.tx[t] /opt/prj/embedded.adas.emergencybreaking/
# Install required python dependencies for our service - the result stored in a separate image layer
# which is used as cache in the next build even if the source files were changed (those are copied in one of the next steps).
ONBUILD RUN python3 -m pip install --no-cache-dir -r /opt/prj/embedded.adas.emergencybreaking/requirements.txt
# Install all linux packages that are listed in /opt/export/build/opt/prj/*/install-packages.txt
# and /opt/prj/*/install-packages.txt
ONBUILD COPY .placeholder pr[j]/install-packages.tx[t] /opt/prj/embedded.adas.emergencybreaking/
ONBUILD RUN install-build-packages
# "prj" must contain all files that are required for building the Python app.
# This typically contains a dependencies/lib directory - in this step we only copy that directory
# so that "pip install" is not run after every source file change.
ONBUILD COPY pr[j]/dependencie[s]/li[b] /opt/prj/embedded.adas.emergencybreaking/dependencies/lib
# .egg/.whl archives can contain binary .so files which can be linked to system libraries.
# We need to copy the system libraries that are linked from .so files in .whl/.egg packages.
# (Maybe Py)
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.whl /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.egg; do \
if [ -e "$lib_file" ]; then \
mkdir -p /tmp/lib; \
cd /tmp/lib; \
unzip $lib_file "*.so"; \
find /tmp/lib -iname "*.so" -exec ldd {} \; ; \
linked_libs=$( ( find /tmp/lib -iname "*.so" -exec get-linked-libs {} \; ) | egrep -v "^/tmp/lib/" ); \
export-install $linked_libs; \
cd -; \
rm -rf /tmp/lib; \
fi \
done
# Install required python dependencies for our service - the result is stored in a separate image layer
# which can be used as cache in the next build even if the source files are changed (those are copied in one of the next steps).
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.whl; do \
[ -e "$lib_file" ] || continue; \
\
echo "python3 -m pip install --no-cache-dir $lib_file" && \
python3 -m pip install --no-cache-dir $lib_file; \
done
ONBUILD RUN \
for lib_file in /opt/prj/embedded.adas.emergencybreaking/dependencies/lib/*.egg; do \
[ -e "$lib_file" ] || continue; \
\
# Note: This will probably not work any more as easy_install is no longer contained in setuptools!
echo "python3 -m easy_install $lib_file" && \
python3 -m easy_install $lib_file; \
done
# Copy the rest of the prj directory.
ONBUILD COPY pr[j] /opt/prj/embedded.adas.emergencybreaking/
# Show what files we are working on
ONBUILD RUN find /opt/prj/embedded.adas.emergencybreaking/ -type f
# Create an executable with PyInstaller so that python does not need to be installed in the "run" image.
# This produces a lot of error messages like this:
# Error relocating /usr/local/lib/python3.8/lib-dynload/_uuid.cpython-38-x86_64-linux-gnu.so: PyModule_Create2: symbol not found
# If the reported functions/symbols are called from our python service, the missing dependencies probably have to be installed.
ONBUILD ARG PYINSTALLER_ARGS
ONBUILD ENV PYINSTALLER_ARGS=${PYINSTALLER_ARGS}
ONBUILD ARG LD_LIBRARY_PATH_EXTENSION
ONBUILD ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${LD_LIBRARY_PATH_EXTENSION}
ONBUILD RUN mkdir -p /usr/lib64 # Workaround for FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib64' from pyinstaller
ONBUILD RUN \
apt-get update && \
apt-get install -y \
libgl1-mesa-glx \
libx11-xcb1 && \
apt-get clean all && \
rm -r /var/lib/apt/lists/* && \
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH}" && \
echo "pyinstaller -p /opt/prj/embedded.adas.emergencybreaking/src -p /opt/prj/embedded.adas.emergencybreaking/dependencies/src -p /usr/local/lib/python3.7/dist-packages --hidden-import=torch --hidden-import=torchvision --onefile ${PYINSTALLER_ARGS} /opt/prj/embedded.adas.emergencybreaking/src/adas_emergencybreaking/__main__.py" && \
pyinstaller -p /opt/prj/embedded.adas.emergencybreaking/src -p /opt/prj/embedded.adas.emergencybreaking/dependencies/src -p /usr/local/lib/python3.7/dist-packages --hidden-import=torch --hidden-import=torchvision --onefile ${PYINSTALLER_ARGS} /opt/prj/embedded.adas.emergencybreaking/src/adas_emergencybreaking/__main__.py ; \
# Maybe we will need to add additional paths with -p ...
# Copy the runnable to our default location /opt/run/app
ONBUILD RUN mkdir -p /opt/run && \
cp -p -v /opt/prj/embedded.adas.emergencybreaking/dist/__main__ /opt/run/app
# Show linked libraries (as static linking does not work yet these have to be copied to the "run" image below)
#ONBUILD RUN get-linked-libs /usr/local/lib/libpython*.so.*
#ONBUILD RUN get-linked-libs /opt/run/app
# Add the executable and all linked libraries to the export/install directory
# so that they will be copied to the final "run" image
ONBUILD RUN export-install $( get-linked-libs /opt/run/app )
# Show what we have produced
ONBUILD RUN find /opt/export -type f
The requirements.txt, which is used to install my dependencies, looks like this:
numpy
tensorflow-cpu
matplotlib
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
--find-links https://download.pytorch.org/whl/torch_stable.html
torchvision==0.12.0+cpu
Is there anything obviously wrong here?

How to build a custom image using 'python:alpine' for use with AWS Lambda?

This page describes creating a Docker image for use with Lambda using 'python:buster'
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html#images-create-from-alt
I would like to do the same with 'python:alpine'
but I run into problems when trying to install 'libcurl4-openssl-dev'.
Has anyone successfully built a 'python:alpine' image for use in lambda?
The package "libcurl4-openssl-dev" belongs to the Debian/Ubuntu family; it does not exist in the Alpine Linux distro, which has only libcurl.
By the way, you can search Alpine packages here: https://pkgs.alpinelinux.org/packages
If you want to achieve a custom Lambda Python runtime with Alpine, then this Dockerfile might be useful.
I made slight modifications to fit it into the Alpine Linux world.
# Define function directory
ARG FUNCTION_DIR="/function"
FROM python:alpine3.12 AS python-alpine
RUN apk add --no-cache \
libstdc++
FROM python-alpine as build-image
# Install aws-lambda-cpp build dependencies
RUN apk add --no-cache \
build-base \
libtool \
autoconf \
automake \
libexecinfo-dev \
make \
cmake \
libcurl
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy function code
COPY app/* ${FUNCTION_DIR}
# Install the runtime interface client
RUN python -m pip install --upgrade pip
RUN python -m pip install \
--target ${FUNCTION_DIR} \
awslambdaric
# Multi-stage build: grab a fresh copy of the base image
FROM python-alpine
# Include global arg in this stage of the build
ARG FUNCTION_DIR
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# Copy in the build image dependencies
COPY --from=build-image ${FUNCTION_DIR} ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.handler" ]
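As a rough sketch of building and publishing such an image (the repository name, account ID, and region below are placeholders, not values from the question):
# build the image from the Dockerfile above
docker build -t my-alpine-lambda .
# push it to ECR so Lambda can pull it
aws ecr create-repository --repository-name my-alpine-lambda
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-alpine-lambda:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-alpine-lambda:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-alpine-lambda:latest
Local testing of an image that only runs awslambdaric generally requires the AWS Lambda Runtime Interface Emulator, as described in the AWS docs linked above.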

version `GLIBC_2.29' not found

I am basing my dockerfile on the rust base image.
When deploying my image to an azure container, I receive this log:
./bot: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./bot)
./bot is my application.
The error also occurs when I perform docker run on my Linux Mint desktop.
How can I get GLIBC into my container?
Dockerfile
FROM rust:1.50
WORKDIR /usr/vectorizer/
COPY ./Cargo.toml /usr/vectorizer/Cargo.toml
COPY ./target/release/trampoline /usr/vectorizer/trampoline
COPY ./target/release/bot /usr/vectorizer/bot
COPY ./target/release/template.svg /usr/vectorizer/template.svg
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y musl-tools && \
rustup target add x86_64-unknown-linux-musl
CMD ["./trampoline"]
Now, I don't totally understand the dependencies of your particular project, but the Dockerfile below should get you started.
What you want to do is compile in an image that has all of your dev dependencies and then move the build artifacts to a much smaller (but compatible) image.
FROM rust:1.50 as builder
RUN USER=root
RUN mkdir bot
WORKDIR /bot
ADD . ./
RUN cargo clean && \
cargo build -vv --release
FROM debian:buster-slim
ARG APP=/usr/src/app
ENV APP_USER=appuser
RUN groupadd $APP_USER \
&& useradd -g $APP_USER $APP_USER \
&& mkdir -p ${APP}
# Copy the compiled binaries into the new container.
COPY --from=builder /bot/target/release/bot ${APP}/bot
RUN chown -R $APP_USER:$APP_USER ${APP}
USER $APP_USER
WORKDIR ${APP}
CMD ["./trampoline"]
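A rough build-and-run sketch under the same assumptions (the tag is a placeholder; note that this final stage only copies the bot binary, so the command passed at run time should match what was actually copied):
docker build -t my-bot .
# override the CMD to run the binary that exists in the image
docker run --rm my-bot ./bot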

Creating an Elastic search image. Issue with non-root user and image size

Solution - Used a multi-stage build file to reduce the size significantly. The solution is pasted below.
Edit: I already played with the default Elasticsearch image, but the purpose of this exercise is to learn how a Dockerfile works. That is how I noticed my image being much larger (2.5 GB) than the official image (742 MB).
I'm new to the Docker/containers landscape (was using Vagrant till now).
To get a better understanding of how a Dockerfile works, I decided to create an ES image (similar to ones I created in the past for a Vagrant box).
Can someone help review the Dockerfile and answer the following issues I'm running into?
Running ES as root is not allowed, and running it from /home/newuser gives the following error.
What am I missing here? How should I create a new user/group to resolve this issue?
newuser@9f5820d430eb:~$ elasticsearch
Exception in thread "main" java.lang.RuntimeException: starting java failed with [1]
output:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.001s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
The Dockerfile installs the JDK and ES, but the image size is over 2 GB. Can this be reduced? I found something called multi-stage builds, but I'm not sure how to fit that concept into my Dockerfile.
In Vagrant provisions I add the paths to /etc/environment.
Should this be done for containers? I am not sure if it adds any value.
Dockerfile -
# Base image stage 1
FROM ubuntu
#MAINTAINER demo@gmail.com
LABEL maintainer="demo@foo.com"
############################################
### Install openjava
############################################
#RUN apt-get update
ARG JAVA_HOME=/opt/java
ARG JDK_PACKAGE=openjdk-14.0.2_linux-x64_bin.tar.gz
# setup paths
ENV JAVA_HOME $JAVA_HOME
# Setup JAVA_HOME, this is useful for docker commandline
ENV PATH $PATH:$JAVA_HOME/bin
## write to environment file for all future sessions
# sudo /bin/sh -c 'echo JAVA_HOME="/opt/java/" >> /etc/environment'
# sudo /bin/sh -c '. /etc/environment ; echo PATH="$JAVA_HOME/bin:$PATH" >> /etc/environment'
## download open java
# ADD https://download.java.net/java/GA/jdk14.0.2/205943a0976c4ed48cb16f1043c5c647/12/GPL/$JDK_PACKAGE /
# ADD $JDK_PACKAGE /
COPY $JDK_PACKAGE /
RUN mkdir -p $JAVA_HOME/ && \
tar -zxf /$JDK_PACKAGE --strip-components 1 -C $JAVA_HOME && \
rm -f /$JDK_PACKAGE
############################################
### Install elastic search
############################################
ARG ES_HOME=/opt/elasticsearch
ARG ES_PACKAGE=elasticsearch-7.10.1-linux-x86_64.tar.gz
# setup paths
ENV ES_HOME $ES_HOME
# Setup ES_HOME, this is useful for docker commandline
ENV PATH $PATH:$ES_HOME/bin
##write to environment file for all future sessions
#sudo /bin/sh -c 'echo ES_HOME="/opt/elasticsearch/" >> /etc/environment'
#sudo /bin/sh -c '. /etc/environment ; echo PATH="$ES_HOME/bin:$PATH" >> /etc/environment'
## download es
# ADD https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.1-linux-x86_64.tar.gz /
# ADD $JDK_PACKAGE /
COPY $ES_PACKAGE /
RUN mkdir -p $ES_HOME/ && \
tar -zxf /$ES_PACKAGE --strip-components 1 -C $ES_HOME && \
rm -f /$ES_PACKAGE
# Mount elasticsearch.yml config
ADD config/elasticsearch.yml /elasticsearch/config/elasticsearch.yml
#ADD config/elasticsearch.yml /
############################################
### Others
############################################
# Expose ports
EXPOSE 9200
EXPOSE 9300
## give permission to entire elasticsearch setup directory
RUN chmod 755 -R $ES_HOME
RUN chmod 755 -R $JAVA_HOME
RUN chmod 755 -R /var/log
# add non root user
RUN useradd newuser --create-home --shell /bin/bash
RUN echo 'newuser:newpassword' | chpasswd
RUN adduser newuser sudo
USER newuser
WORKDIR /home/newuser
# Define default command.
#CMD ["elasticsearch"]
Solution -
Multi-stage build file with non-root user
ARG JAVA_HOME=/opt/java
ARG JDK_PACKAGE=openjdk-14.0.2_linux-x64_bin.tar.gz
ARG ES_HOME=/opt/elasticsearch
ARG ES_PACKAGE=elasticsearch-7.10.1-linux-x86_64.tar.gz
#MAINTAINER demo@gmail.com
#LABEL maintainer="demo@foo.com"
############################################
### Install openjava
############################################
# Base image stage 1
FROM ubuntu as jdk
ARG JAVA_HOME
ARG JDK_PACKAGE
WORKDIR /opt/
## download open java
# ADD https://download.java.net/java/GA/jdk14.0.2/205943a0976c4ed48cb16f1043c5c647/12/GPL/$JDK_PACKAGE /
# ADD $JDK_PACKAGE /
COPY $JDK_PACKAGE .
RUN mkdir -p $JAVA_HOME/ && \
tar -zxf $JDK_PACKAGE --strip-components 1 -C $JAVA_HOME && \
rm -f $JDK_PACKAGE
############################################
### Install elastic search
############################################
# Base image stage 2
FROM ubuntu as es
#ARG JAVA_HOME
ARG ES_HOME
ARG ES_PACKAGE
WORKDIR /opt/
## download es
# ADD https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.1-linux-x86_64.tar.gz /
# ADD $JDK_PACKAGE /
COPY $ES_PACKAGE .
RUN mkdir -p $ES_HOME/ && \
tar -zxf $ES_PACKAGE --strip-components 1 -C $ES_HOME && \
rm -f $ES_PACKAGE
# Mount elasticsearch.yml config
ADD config/elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml
############################################
### final
############################################
FROM ubuntu as finalbuild
ARG JAVA_HOME
ARG ES_HOME
ARG ES_PACKAGE
WORKDIR /opt/
# get artifacts from previous stages
COPY --from=jdk $JAVA_HOME $JAVA_HOME
COPY --from=es $ES_HOME $ES_HOME
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME $JAVA_HOME
ENV ES_HOME $ES_HOME
# setup paths
ENV PATH $PATH:$JAVA_HOME/bin
ENV PATH $PATH:$ES_HOME/bin
# Expose ports
EXPOSE 9200
EXPOSE 9300
# Define mountable directories.
#VOLUME ["/data"]
## give permission to entire elasticsearch setup directory
RUN useradd newuser --create-home --shell /bin/bash && \
echo 'newuser:newpassword' | chpasswd && \
chown -R newuser $ES_HOME $JAVA_HOME && \
chown -R newuser:newuser /home/newuser && \
chmod 755 /home/newuser
#chown -R newuser:newuser /home/newuser
#chown -R newuser /home/newuser && \
USER newuser
WORKDIR /home/newuser
#RUN chown -R newuser /home/newuser
#RUN apt-get update && \
# apt-get install -yq curl
# Define default command.
CMD ["elasticsearch"]
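For reference, a minimal build-and-run sketch for the solution above (assuming the JDK and Elasticsearch tarballs plus config/elasticsearch.yml sit in the build context; the tag is made up, and additional Elasticsearch settings may still be needed for a clean start):
docker build -t my-es:7.10.1 .
docker run --rm -p 9200:9200 -p 9300:9300 my-es:7.10.1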
The concept of docker is that you have tons of out-of-the-box images ready for you!
Why do you want to build your own Dockerfile for a common tech like Elasticsearch?
Why not simply:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.10.1
and you have the image ready to run locally?
You can read more about running Elasticseach with docker here.
BTW, this image size is ~774MB
EDIT:
If it's for learning purposes, I can recommend dive, which can analyze baked images (like elasticsearch:7.10.1), showing you each step of the image build (in other words, the Dockerfile that built that image) and the base image it starts FROM.
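A minimal usage sketch, assuming dive is installed locally:
# pull the official image, then walk through its layers and the commands that created them
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.10.1
dive docker.elastic.co/elasticsearch/elasticsearch:7.10.1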

Can't build openjdk:8-jdk image directly

I'm slowly making my way through the Riot Taking Control of your Docker Image tutorial http://engineering.riotgames.com/news/taking-control-your-docker-image. This tutorial is a little old, so there are some definite changes to how the end file looks. After hitting several walls I decided to work in the opposite order of the tutorial. I successfully folded the official jenkinsci image into my personal Dockerfile, starting with FROM openjdk:8-jdk. But when I try to fold the openjdk:8-jdk file into my personal image, I receive the following error
E: Version '8u102-b14.1-1~bpo8+1' for 'openjdk-8-jdk' was not found
ERROR: Service 'jenkinsmaster' failed to build: The command '/bin/sh
-c set -x && apt-get update && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" && rm -rf
/var/lib/apt/lists/* && [ "$JAVA_HOME" = "$(docker-java-home)" ]'
returned a non-zero code: 100 Cosettes-MacBook-Pro:docker-test
Cosette$
I'm receiving this error even when I gave up and directly copied and pasted the openjdk:8-jdk Dockerfile into my own. My end goal is to bring my personal Dockerfile down to the point that it starts FROM debian:jessie. Any help would be appreciated.
My Dockerfile:
FROM buildpack-deps:jessie-scm
# A few problems with compiling Java from source:
# 1. Oracle. Licensing prevents us from redistributing the official JDK.
# 2. Compiling OpenJDK also requires the JDK to be installed, and it gets
# really hairy.
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list
# Default to UTF-8 file.encoding
ENV LANG C.UTF-8
# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u102
ENV JAVA_DEBIAN_VERSION 8u102-b14.1-1~bpo8+1
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN set -x \
&& apt-get update \
&& apt-get install -y \
openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
&& rm -rf /var/lib/apt/lists/* \
&& [ "$JAVA_HOME" = "$(docker-java-home)" ]
# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Jenkins Specifics
# install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# Set Jenkins Environmental Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=dc28b91e553c1cd42cc30bd75d0f651671e6de0b
ENV JENKINS_UC https://updates.jenkins.io
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from the host or a data
# container, ensure you use the same uid.
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins. Could use ADD but this one does not check Last-Modified header neither does it
# allow to control checksum. see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
# Prep Jenkins Directories
USER root
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R ${group}:${user} /var/log/jenkins
RUN chown -R ${group}:${user} /var/cache/jenkins
# Expose ports for web (8080) & node (50000) agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup
# /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Switch to the jenkins user
USER ${user}
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Try a JAVA_DEBIAN_VERSION of 8u111-b14-2~bpo8+1
Here's what happens: when you build the Dockerfile, Docker tries to execute all the lines in it. One of those is this apt command: apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION". This command says "Install OpenJDK version $JAVA_DEBIAN_VERSION, exactly. Nothing else." That version is no longer available in the Debian repositories, so it can't be apt-get installed! I believe this happens with all packages in official mirrors: if a new version of the package is released, the older version is no longer around to be installed.
If you want to access older Debian packages, you can use something like http://snapshot.debian.org/. The older OpenJDK package has known security vulnerabilities. I recommend using the latest version.
You can use the latest version by leaving out the explicit version in the apt-get command. On the other hand, this will make your image less reproducible: building the image today may get you u111, building it tomorrow may get you u112.
As for why the instructions worked in the other Dockerfile, I think the reason is that at the time the other Dockerfile was built, the package was available. So docker could apt-get install it. Docker then built the image containing the (older) OpenJDK. That image is a binary, so you can install it, or use it in FROM without any issues. But you can't reproduce the image: if you were to try and build the same image yourself, you would run into the same errors.
This also brings up an issue about security updates: since docker images are effectively static binaries (built once, bundle in all dependencies), they don't get security updates once built. You need to keep track of any security updates affecting your docker images and rebuild any affected docker images.
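To illustrate the two options above, a rough sketch (the snapshot timestamp below is only a placeholder; you would have to pick an archive date that actually contains the package version you want):
# Option 1: drop the version pin and take whatever openjdk-8-jdk the current mirror offers
RUN apt-get update \
 && apt-get install -y openjdk-8-jdk ca-certificates-java \
 && rm -rf /var/lib/apt/lists/*
# Option 2: point apt at snapshot.debian.org to reach an archived package version
RUN echo 'deb http://snapshot.debian.org/archive/debian/20161201T000000Z/ jessie-backports main' \
      > /etc/apt/sources.list.d/snapshot.list \
 && apt-get -o Acquire::Check-Valid-Until=false update \
 && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"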
